Feb 17 15:54:46 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 15:54:46 crc restorecon[4710]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 15:54:46 crc restorecon[4710]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 
15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
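Every entry above follows the same journald shape: a timestamp, the host (`crc`), the `restorecon[pid]` tag, a file path, and the SELinux context (`user:role:type:level`, e.g. `system_u:object_r:container_file_t:s0:c336,c787`) that restorecon declined to reset. When triaging a long run like this, it helps to tally which contexts were skipped. The sketch below is a hypothetical helper (the function name and regex are mine, inferred from the entry format visible in this excerpt), not part of any tool shown here:

```python
import re
from collections import Counter

# Matches journald restorecon entries of the form seen above:
#   "Feb 17 15:54:46 crc restorecon[4710]: <path> not reset as customized by admin to <context>"
# The field layout is inferred from this log excerpt; paths contain no spaces,
# and the context is the final whitespace-free token on the line.
ENTRY = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) restorecon\[(?P<pid>\d+)\]: "
    r"(?P<path>\S+) not reset as customized by admin to "
    r"(?P<context>\S+)$"
)

def tally_skipped_contexts(lines):
    """Count 'not reset' entries per full SELinux context (user:role:type:level)."""
    counts = Counter()
    for line in lines:
        m = ENTRY.match(line.strip())
        if m:
            counts[m.group("context")] += 1
    return counts

# One entry lifted verbatim from the log above, as a smoke test.
sample = [
    "Feb 17 15:54:46 crc restorecon[4710]: "
    "/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts "
    "not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787",
]
print(tally_skipped_contexts(sample))
```

Grouping by the MCS category pair (the trailing `cNNN,cNNN`) rather than the whole context is a one-line change (`m.group("context").rsplit(":", 1)[-1]`) and maps each cluster of entries back to the pod sandbox that owns those categories.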
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc 
restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:46 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:46 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 15:54:47 crc restorecon[4710]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc 
restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 15:54:47 crc restorecon[4710]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 15:54:48 crc kubenswrapper[4829]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.018853 4829 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029283 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029341 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029353 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029363 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029372 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029380 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029389 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029396 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029407 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029419 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029427 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029436 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029444 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029453 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029461 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029470 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029479 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029488 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029496 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029503 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029511 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029519 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029528 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029536 4829 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029544 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029553 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029560 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029568 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029605 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029613 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029620 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029630 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029647 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029655 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029664 4829 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029672 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029680 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029689 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029696 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029704 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029711 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029719 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029726 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029734 4829 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029741 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029749 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029757 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029765 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029774 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029783 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029790 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029797 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029807 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029815 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029823 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029830 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029839 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029847 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029855 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029862 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029870 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029877 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029887 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029897 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029907 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029915 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029923 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029933 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029943 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029952 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.029959 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030949 4829 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030971 4829 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030986 4829 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.030998 4829 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031009 4829 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031018 4829 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031029 4829 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031040 4829 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031050 4829 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031059 4829 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031069 4829 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031080 4829 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031090 4829 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031099 4829 flags.go:64] FLAG: --cgroup-root=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031108 4829 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031117 4829 flags.go:64] FLAG: --client-ca-file=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031126 4829 flags.go:64] FLAG: --cloud-config=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031135 4829 flags.go:64] FLAG: --cloud-provider=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031144 4829 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031155 4829 flags.go:64] FLAG: --cluster-domain=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031164 4829 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031173 4829 flags.go:64] FLAG: --config-dir=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031182 4829 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031191 4829 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031202 4829 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031211 4829 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031220 4829 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031229 4829 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031238 4829 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031248 4829 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031256 4829 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031266 4829 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031274 4829 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031285 4829 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031295 4829 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031304 4829 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031312 4829 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031321 4829 flags.go:64] FLAG: --enable-server="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031330 4829 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031344 4829 flags.go:64] FLAG: --event-burst="100"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031354 4829 flags.go:64] FLAG: --event-qps="50"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031363 4829 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031372 4829 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031381 4829 flags.go:64] FLAG: --eviction-hard=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031391 4829 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031400 4829 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031409 4829 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031419 4829 flags.go:64] FLAG: --eviction-soft=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031428 4829 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031437 4829 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031446 4829 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031456 4829 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031465 4829 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031474 4829 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031482 4829 flags.go:64] FLAG: --feature-gates=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031493 4829 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031502 4829 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031511 4829 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031521 4829 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031530 4829 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031539 4829 flags.go:64] FLAG: --help="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031549 4829 flags.go:64] FLAG: --hostname-override=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031558 4829 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031567 4829 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031605 4829 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031615 4829 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031624 4829 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031633 4829 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031642 4829 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031650 4829 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031659 4829 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031671 4829 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031681 4829 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031690 4829 flags.go:64] FLAG: --kube-reserved=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031698 4829 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031707 4829 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031716 4829 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031725 4829 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031734 4829 flags.go:64] FLAG: --lock-file=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031743 4829 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031752 4829 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031761 4829 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031785 4829 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031795 4829 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031805 4829 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031814 4829 flags.go:64] FLAG: --logging-format="text"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031823 4829 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031833 4829 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031842 4829 flags.go:64] FLAG: --manifest-url=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031851 4829 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031862 4829 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031871 4829 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031882 4829 flags.go:64] FLAG: --max-pods="110"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031891 4829 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031899 4829 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031909 4829 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031917 4829 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031927 4829 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031935 4829 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.031944 4829 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032897 4829 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032907 4829 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032917 4829 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032926 4829 flags.go:64] FLAG: --pod-cidr=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032935 4829 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032948 4829 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032957 4829 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032967 4829 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032976 4829 flags.go:64] FLAG: --port="10250"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032986 4829 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.032995 4829 flags.go:64] FLAG: --provider-id=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033004 4829 flags.go:64] FLAG: --qos-reserved=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033013 4829 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033022 4829 flags.go:64] FLAG: --register-node="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033031 4829 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033040 4829 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033055 4829 flags.go:64] FLAG: --registry-burst="10"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033064 4829 flags.go:64] FLAG: --registry-qps="5"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033073 4829 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033088 4829 flags.go:64] FLAG: --reserved-memory=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033099 4829 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033108 4829 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033118 4829 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033126 4829 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033135 4829 flags.go:64] FLAG: --runonce="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033144 4829 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033153 4829 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033164 4829 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033173 4829 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033182 4829 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033191 4829 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033200 4829 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033209 4829 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033218 4829 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033227 4829 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033236 4829 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033245 4829 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033254 4829 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033263 4829 flags.go:64] FLAG: --system-cgroups=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033272 4829 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033286 4829 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033294 4829 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033303 4829 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033314 4829 flags.go:64] FLAG: --tls-min-version=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033323 4829 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033332 4829 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033341 4829 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033349 4829 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033358 4829 flags.go:64] FLAG: --v="2"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033370 4829 flags.go:64] FLAG: --version="false"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033381 4829 flags.go:64] FLAG: --vmodule=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033392 4829 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.033401 4829 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033631 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033642 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033652 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033661 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033669 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033679 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033687 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033696 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033705 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033712 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033721 4829 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033731 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033740 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033749 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033757 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033765 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033773 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033781 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033791 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033800 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033809 4829 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033817 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033826 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033835 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033843 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033851 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033858 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033867 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033875 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033883 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033891 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033899 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033907 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033916 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033923 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033931 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033939 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033948 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033957 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033966 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033975 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033983 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.033991 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034000 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034008 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034016 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034024 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034032 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034040 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034048 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034058 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034068 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034078 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034086 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034095 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034104 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034116 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034127 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034140 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034154 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034165 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034174 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034184 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034194 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034203 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034211 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034220 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034228 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034236 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034245 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.034253 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.034265 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.046313 4829 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.046362 4829 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046491 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046514 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046522 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046532 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046540 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046548 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046556 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046564 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046602 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046611 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046619 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046627 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046634 4829 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046642 4829 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046650 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046658 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046666 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046675 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046683 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046692 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046699 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046707 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046715 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046723 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046731 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046739 4829
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046746 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046754 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046762 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046770 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046777 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046785 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046795 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046808 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046819 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046828 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046837 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046845 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046853 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046861 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046869 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046877 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046884 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046892 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046900 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046908 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046916 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 
15:54:48.046923 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046931 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046938 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046946 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046954 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046961 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046973 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046985 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.046998 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047011 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047024 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047037 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047046 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047054 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047063 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047071 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047080 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047088 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047097 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047105 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047113 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047121 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047129 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047138 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.047152 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047371 4829 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047383 4829 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047392 4829 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047401 4829 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047409 4829 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047417 4829 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047425 4829 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047432 4829 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047440 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047448 4829 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047456 4829 feature_gate.go:330] unrecognized feature gate: Example Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047463 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047471 4829 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047479 4829 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047489 4829 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047497 4829 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047506 4829 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047514 4829 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047521 4829 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047529 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047537 4829 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047545 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047552 4829 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047559 4829 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047567 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047607 4829 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047615 4829 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 15:54:48 crc 
kubenswrapper[4829]: W0217 15:54:48.047626 4829 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047635 4829 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047644 4829 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047653 4829 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047660 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047668 4829 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047676 4829 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047688 4829 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047697 4829 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047706 4829 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047715 4829 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047723 4829 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047731 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047739 4829 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047746 4829 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047754 4829 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047762 4829 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047770 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047777 4829 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047785 4829 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047795 4829 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047805 4829 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047813 4829 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047822 4829 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047833 4829 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047841 4829 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047848 4829 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047856 4829 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047863 4829 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047871 4829 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047879 4829 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047887 4829 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047894 4829 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047902 4829 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047910 4829 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 
15:54:48.047917 4829 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047925 4829 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047932 4829 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047939 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047948 4829 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047956 4829 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047963 4829 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047973 4829 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.047984 4829 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.047999 4829 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.048223 4829 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.054208 4829 bootstrap.go:85] "Current 
kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.054326 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.055980 4829 server.go:997] "Starting client certificate rotation" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.056028 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.057439 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-24 01:48:50.912562874 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.057631 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.087021 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.090911 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.093717 4829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.113809 4829 log.go:25] "Validated CRI v1 runtime API" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.153307 4829 log.go:25] "Validated CRI v1 image API" 
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.156512 4829 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.161785 4829 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-15-49-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.161841 4829 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180203 4829 manager.go:217] Machine: {Timestamp:2026-02-17 15:54:48.177425026 +0000 UTC m=+0.594443014 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:420e9fca-55f5-42fc-a60a-919d603b95e0 BootID:e093bc13-e732-4259-b0a8-2325e80c34f5 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 
DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:26:91:8b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:26:91:8b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:91:01:36 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:31:97:72 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:de:60:64 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f2:de:06 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:32:8c:24:24:37 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:68:71:55:29:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified 
Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180428 4829 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.180788 4829 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.181608 4829 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.181956 4829 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182006 4829 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182323 4829 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182343 4829 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.182989 4829 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.183046 4829 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.183876 4829 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.184023 4829 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190852 4829 kubelet.go:418] "Attempting to sync node with API server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190887 4829 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190925 4829 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190946 4829 kubelet.go:324] "Adding apiserver pod source" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.190963 4829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.197549 4829 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.198725 4829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.199790 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.199888 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.199962 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.199992 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.201663 4829 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203277 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203307 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.203317 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203327 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203342 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203351 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203361 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203377 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203388 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203399 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203413 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.203422 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.204317 4829 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.205164 4829 server.go:1280] "Started kubelet" Feb 17 15:54:48 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.207758 4829 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.207719 4829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.209168 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.209844 4829 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212170 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212328 4829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212353 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:47:57.847606568 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212910 4829 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.212942 4829 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.218451 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.218741 4829 server.go:460] 
"Adding debug handlers to kubelet server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.220180 4829 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.220526 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.219756 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189513b30e988654 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:54:48.20510882 +0000 UTC m=+0.622126818,LastTimestamp:2026-02-17 15:54:48.20510882 +0000 UTC m=+0.622126818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.223539 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.223906 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.228008 4829 factory.go:55] Registering systemd factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.228046 4829 factory.go:221] Registration of the systemd container factory successfully Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230154 4829 factory.go:153] Registering CRI-O factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230206 4829 factory.go:221] Registration of the crio container factory successfully Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.230962 4829 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.231067 4829 factory.go:103] Registering Raw factory Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.231109 4829 manager.go:1196] Started watching for new ooms in manager Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.232677 4829 manager.go:319] Starting recovery of all containers Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235517 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235604 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235632 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235649 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235668 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235695 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235723 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235742 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235798 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235817 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235832 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235849 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235861 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235875 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235887 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235940 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235950 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235963 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235975 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235986 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.235997 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236009 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236019 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236031 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236043 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236081 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236099 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236111 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236124 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236135 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236147 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236158 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236170 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236187 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236199 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236211 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236228 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236260 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236274 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236286 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236299 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236309 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236353 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236372 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236388 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236400 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236419 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236444 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236461 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236480 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236530 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236555 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236609 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236627 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236639 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236651 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236663 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236673 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236748 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236760 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236772 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236785 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236799 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236815 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236831 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236846 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236921 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236938 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236953 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.236995 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237011 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237027 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237044 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237069 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237140 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237155 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237172 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237195 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237212 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237228 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237244 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237287 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237327 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237355 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237380 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237397 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237414 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237432 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237449 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237470 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237487 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237536 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237558 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237597 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237615 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237634 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237652 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237668 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237685 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237700 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237724 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237749 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237780 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237800 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237828 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237849 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237865 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237882 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237919 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237937 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237958 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237974 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.237989 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238003 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238020 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238035 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238049 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238063 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238078 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238093 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238109 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238137 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238152 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238166 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238180 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238195 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238222 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238238 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238261 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238280 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238297 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238314 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238330 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238345 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238360 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238377 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238394 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238410 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238426 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238447 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238465 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238480 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238495 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238509 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238527 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238544 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238558 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238642 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238665 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238681 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238697 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238717 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238734 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238751 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238800 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238820 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238835 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238852 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238868 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238884 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238899 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238916 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238932 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238947 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238965 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.238980 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239004 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239028 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239051 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239075 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239091 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239108 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239123 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239139 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239156 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239171 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239186 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239202 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239218 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239235 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239255 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239273 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239290 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239312 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239329 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239356 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239369 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239388 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239400 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 
15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239417 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239430 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239451 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239463 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239476 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239490 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239504 4829 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239519 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.239532 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241561 4829 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241623 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241644 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.241662 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241680 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241696 4829 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241710 4829 reconstruct.go:97] "Volume reconstruction finished" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.241721 4829 reconciler.go:26] "Reconciler: start to sync state" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.260785 4829 manager.go:324] Recovery completed Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.270999 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272889 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.272932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.273549 4829 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.273584 4829 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.273620 4829 state_mem.go:36] "Initialized new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.275027 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.277996 4829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.278044 4829 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.278077 4829 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.278130 4829 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.279945 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.280007 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.295216 4829 policy_none.go:49] "None policy: Start" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 
15:54:48.295966 4829 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.296003 4829 state_mem.go:35] "Initializing new in-memory state store" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.321678 4829 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367494 4829 manager.go:334] "Starting Device Plugin manager" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367598 4829 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.367629 4829 server.go:79] "Starting device plugin registration server" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368216 4829 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368242 4829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368445 4829 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368674 4829 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.368688 4829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.376418 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.378466 4829 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.378550 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379767 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.379924 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.380229 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.380279 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381376 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.381874 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382172 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382274 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382651 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382756 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382872 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.382910 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383604 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383723 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.383913 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: 
I0217 15:54:48.384037 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.384083 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385257 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385314 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385671 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.385734 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.386984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.387085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.387171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.419275 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443834 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443914 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.443963 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444011 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444209 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444340 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444363 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444384 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444423 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444660 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.444744 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.469163 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473545 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.473617 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.474279 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: 
connect: connection refused" node="crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546439 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546514 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546614 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546736 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546767 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546853 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546882 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546798 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546893 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546935 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547036 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547090 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.546895 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547110 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547179 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547230 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547299 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547312 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.547450 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.675302 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.677154 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 
15:54:48.677896 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.720673 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.729830 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.747878 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.767522 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: I0217 15:54:48.774275 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.796180 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5 WatchSource:0}: Error finding container 9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5: Status 404 returned error can't find the container with id 9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5 Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.799379 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4 WatchSource:0}: Error finding container 1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4: Status 404 returned error can't find the container with id 1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4 Feb 17 15:54:48 crc kubenswrapper[4829]: W0217 15:54:48.799734 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5 WatchSource:0}: Error finding container ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5: Status 404 returned error can't find the container with id ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5 Feb 17 15:54:48 crc kubenswrapper[4829]: E0217 15:54:48.820260 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection 
refused" interval="800ms" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.078091 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080041 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.080113 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.080701 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.210323 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.213420 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:00:37.136907366 +0000 UTC Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.242787 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 
15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.242883 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.282273 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0f34543a23695d40405f45f09ddde644d1ef2433fb7c8062037d25b86ea9e7f"} Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.284491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e9c15b71a83cf5df98c86d34420ad30fc01bb981f737de4838ba486f68f97ae3"} Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.285759 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ff228d8dbe6bd90c2861aceb274710d033dc6d9d68a7a456c3dbb9fd1a60adc5"} Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.286494 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1743a1744be9f9360a0b4153323921ba7873c4c65c18474344b6fd9764bdbdc4"} Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.289520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d4cb46703899a7e7d6ea62c450ff7a5e1cd1a3482517c690c22b086290ea6c5"} 
Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.354097 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.354191 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.356088 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.356234 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:49 crc kubenswrapper[4829]: W0217 15:54:49.560611 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.560728 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.621986 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.881517 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883499 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883516 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4829]: I0217 15:54:49.883552 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:49 crc kubenswrapper[4829]: E0217 15:54:49.884120 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.098559 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:50 crc kubenswrapper[4829]: E0217 15:54:50.099836 4829 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the 
control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.210879 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.214279 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:24:09.864195874 +0000 UTC Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296467 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296560 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.296640 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.298124 4829 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.299869 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300192 4829 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6175d8f1ddb2b12d6f0334a1d306f1e4f5ebdc17f9babe2309c0c4381e39463f" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6175d8f1ddb2b12d6f0334a1d306f1e4f5ebdc17f9babe2309c0c4381e39463f"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.300314 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.301555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 
15:54:50.304830 4829 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="51919905706fce2ad68f049f159ac6be0b6980eb772b0f9d152d06da8a0da5d1" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.304944 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.304938 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"51919905706fce2ad68f049f159ac6be0b6980eb772b0f9d152d06da8a0da5d1"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.306697 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307039 4829 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a" exitCode=0 Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307125 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.307247 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 
15:54:50.309729 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.309776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.309797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315426 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315514 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.315547 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5"} Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.316222 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:50 crc 
kubenswrapper[4829]: I0217 15:54:50.317504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.317991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4829]: I0217 15:54:50.318012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.103704 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.103809 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.178993 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.179144 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:51 crc 
kubenswrapper[4829]: I0217 15:54:51.210761 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.215076 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:01:39.390360166 +0000 UTC Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.223539 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319486 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319491 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.319501 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.324414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328448 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.328471 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.329913 4829 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="19db8a23ef793b5e62f01237d70c305322e2d43ce7e2939ad74f9ec198bcd5c8" exitCode=0 Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.329984 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"19db8a23ef793b5e62f01237d70c305322e2d43ce7e2939ad74f9ec198bcd5c8"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.330019 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.330984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.331009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.331021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.332748 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.332976 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333122 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8d74bf8d41be2eefa7a295c997bbf74d4c0a9c2bed7c0e9bac416a32f4def0b4"} Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.333559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334107 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.334148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.484200 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4829]: I0217 15:54:51.485268 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.485926 4829 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 17 15:54:51 crc kubenswrapper[4829]: W0217 15:54:51.580056 
4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 17 15:54:51 crc kubenswrapper[4829]: E0217 15:54:51.580126 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.215533 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:02:46.940179913 +0000 UTC Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.339828 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.339816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208"} Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344433 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.344534 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347035 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="75a3854d1046efad51952b098bedfdaa93df72ae94ae1b44638274a74ac7de02" exitCode=0 Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"75a3854d1046efad51952b098bedfdaa93df72ae94ae1b44638274a74ac7de02"} Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347198 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347208 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.347312 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.348289 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349175 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349195 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349188 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4829]: I0217 15:54:52.349273 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.017842 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.138322 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.216662 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:00:03.023438628 +0000 UTC Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355118 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67dafa73e86617a4a84472e9edfb211bac1507e70cc570b39baf4f1a1c65e262"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355176 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84495d17d891b56ac71d7ff0b1ac041a6ecee29dd0493bbfb1130821bc83e5ab"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355226 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355233 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"daac1c001811204e8b9d046e40005e780ba97d6cdc858404b5a36078b62973b3"} Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.355290 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356884 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4829]: I0217 15:54:53.356965 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.172184 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 15:54:54 crc 
kubenswrapper[4829]: I0217 15:54:54.217458 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:27:28.00322396 +0000 UTC Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bc697e5617d0dfcbb5aaf8b89ba0d526c05237f09023e5bcf4c4d2f254c64398"} Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366186 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66d512e305e59adc13db751b9e0f0f6dbd8c2279a190a066b6db715aab3a1d29"} Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366268 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366556 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.366789 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.367731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.368150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc 
kubenswrapper[4829]: I0217 15:54:54.368364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.368518 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.446295 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.447282 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.450901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.451077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.451214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.463068 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.686716 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689253 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689276 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.689313 4829 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:54 crc kubenswrapper[4829]: I0217 15:54:54.776181 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.218304 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:45:12.234755928 +0000 UTC Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.372983 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.373152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.373449 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374636 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374765 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.374898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375056 4829 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.375073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.516361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 15:54:55 crc kubenswrapper[4829]: I0217 15:54:55.836000 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.218759 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:56:38.799013238 +0000 UTC Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.378695 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.378756 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.380991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4829]: I0217 15:54:56.381046 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.099438 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.219879 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:46:06.003072596 +0000 UTC Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.249766 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.249986 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.251678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.380915 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.380944 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.382954 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.383019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4829]: I0217 15:54:57.383037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.220557 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:54:01.010815514 +0000 UTC Feb 17 15:54:58 crc kubenswrapper[4829]: E0217 15:54:58.376694 4829 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.383451 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.384869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.836911 4829 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:54:58 crc kubenswrapper[4829]: I0217 15:54:58.837046 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.221348 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:08:44.712022243 +0000 UTC Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.905862 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.906033 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:59 crc kubenswrapper[4829]: I0217 15:54:59.907688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc 
kubenswrapper[4829]: I0217 15:55:00.222113 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:07:41.109595055 +0000 UTC Feb 17 15:55:01 crc kubenswrapper[4829]: I0217 15:55:01.222496 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:15:57.474350483 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.210855 4829 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.223527 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:26:47.475358304 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4829]: W0217 15:55:02.621556 4829 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.621724 4829 trace.go:236] Trace[528201369]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:52.619) (total time: 10001ms): Feb 17 15:55:02 crc kubenswrapper[4829]: Trace[528201369]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:55:02.621) Feb 17 15:55:02 crc kubenswrapper[4829]: Trace[528201369]: [10.001790051s] [10.001790051s] END Feb 17 15:55:02 crc kubenswrapper[4829]: E0217 15:55:02.621761 4829 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.944683 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.944764 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.952607 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 15:55:02 crc kubenswrapper[4829]: I0217 15:55:02.952701 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.026308 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]log ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]etcd ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-apiextensions-informers ok Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-system-namespaces-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 15:55:03 crc 
kubenswrapper[4829]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/bootstrap-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 15:55:03 crc kubenswrapper[4829]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]autoregister-completion ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-openapi-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 17 15:55:03 crc kubenswrapper[4829]: livez check failed Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.026380 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:55:03 crc kubenswrapper[4829]: I0217 15:55:03.224660 
4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:30:50.891324652 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.225636 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:33:40.501232841 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.832304 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.833000 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835416 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.835437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4829]: I0217 15:55:04.852989 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.225897 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:19:46.526017778 +0000 UTC Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.404239 4829 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405603 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4829]: I0217 15:55:05.405623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4829]: I0217 15:55:06.226400 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:17:31.513112658 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.226756 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:15:21.282385537 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4829]: E0217 15:55:07.944983 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.948383 4829 trace.go:236] Trace[723915363]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:56.877) (total time: 11071ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[723915363]: ---"Objects listed" error: 11070ms (15:55:07.948) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[723915363]: [11.071010933s] [11.071010933s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.948421 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955048 4829 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955411 4829 trace.go:236] Trace[757981753]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:56.870) (total time: 11084ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[757981753]: ---"Objects listed" error: 11084ms (15:55:07.955) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[757981753]: [11.084903601s] [11.084903601s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.955444 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.959337 4829 trace.go:236] Trace[1247023109]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:55.050) (total time: 12908ms): Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[1247023109]: ---"Objects listed" error: 12908ms (15:55:07.959) Feb 17 15:55:07 crc kubenswrapper[4829]: Trace[1247023109]: [12.908994853s] [12.908994853s] END Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.959404 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.963223 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.964682 4829 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.965004 4829 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966809 4829 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.966867 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:07 crc kubenswrapper[4829]: E0217 15:55:07.988247 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha25
6:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":51
0526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abf
dc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995558 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.995630 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"[container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.996402 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38916->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.996478 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38916->192.168.126.11:17697: read: connection reset by peer" Feb 17 15:55:07 crc kubenswrapper[4829]: I0217 15:55:07.997996 4829 csr.go:261] certificate signing request csr-8vjmq is approved, waiting to be issued Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.011093 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015758 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.015807 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.022002 4829 csr.go:257] certificate signing request csr-8vjmq is issued Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.033520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.034443 4829 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.034522 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.040440 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.055843 4829 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056106 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056123 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056190 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.056262 4829 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056316 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha25
6:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":51
0526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abf
dc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.056803 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.189513b312a3174e\": read tcp 38.102.83.173:36178->38.102.83.173:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.189513b312a3174e default 26179 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:54:48 +0000 UTC,LastTimestamp:2026-02-17 15:54:48.38264355 +0000 UTC m=+0.799661538,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067718 4829 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067777 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.067852 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.085940 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092050 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092059 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.092086 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.108159 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a
-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.108272 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109926 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.109942 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.203506 4829 apiserver.go:52] "Watching apiserver" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.211038 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.211612 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212129 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212131 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.212929 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213124 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213164 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213621 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213620 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.213699 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.213879 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215061 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215420 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215523 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215614 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215632 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215839 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.215891 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.216983 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.218002 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.222938 4829 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.227008 4829 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:38:49.074023485 +0000 UTC Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.249872 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256818 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257116 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257165 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257202 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.256887 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257308 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257333 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257357 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257387 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257408 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257424 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257439 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257455 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" 
(UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257496 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257516 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257534 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257644 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257676 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257704 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257727 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257748 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257786 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257826 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257842 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257860 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257925 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257942 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257957 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257975 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257997 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258022 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258088 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258113 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258195 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258253 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258275 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258300 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258351 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " 
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258397 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258420 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258456 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258482 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258528 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258598 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258622 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258648 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258680 
4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258703 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258730 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258754 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.257957 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258080 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258093 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258839 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258848 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258863 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258880 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258108 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258153 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258381 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258379 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258470 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258617 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258623 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258636 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258696 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258999 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258809 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259040 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259048 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.258848 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259184 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259209 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259239 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259245 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259260 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259294 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259316 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259340 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259404 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259425 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259448 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259470 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259494 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259515 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259535 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259559 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259599 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259624 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259644 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259693 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259734 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259758 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259782 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259805 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259827 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259875 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259936 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259980 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260004 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260077 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260155 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260179 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260200 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260262 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260296 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260322 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260343 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260365 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260389 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260414 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260435 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260456 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260479 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260526 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259310 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259395 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259414 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259440 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259444 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259564 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259646 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259719 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259818 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259924 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.259987 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260099 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260130 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260137 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260158 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260333 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260332 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261554 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260352 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260409 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260476 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260496 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260522 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.260628 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.76060681 +0000 UTC m=+21.177624788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261638 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261664 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261686 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261706 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261752 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261772 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261815 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261835 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261899 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261920 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261964 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261983 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262005 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262123 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262143 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262164 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262184 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262208 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262228 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262250 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262270 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262290 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262313 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262333 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 
15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262354 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262396 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262418 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262438 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262458 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262482 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262504 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262524 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262547 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262567 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262651 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" 
(UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262675 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262697 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262718 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262739 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262787 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262807 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262830 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262854 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262877 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262900 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262921 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262927 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262947 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262973 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.262998 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263021 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263044 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263088 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263109 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263115 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263152 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263169 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263195 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263213 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263231 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263288 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263342 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263365 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260721 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260952 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.260996 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261086 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261326 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261383 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261392 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.261446 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263715 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.264171 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.264272 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.263361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265636 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265663 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265666 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.265936 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266013 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266449 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266584 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266885 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.266975 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267070 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267624 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267671 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267726 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267868 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267923 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.267947 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268025 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268044 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268098 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268199 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268621 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268698 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.268736 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270733 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.270798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271163 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271367 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271434 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271687 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271871 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.271909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272208 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272551 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272515 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272814 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272800 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.272902 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273381 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273451 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.274528 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273627 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.273901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.284135 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.284233 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302205 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302294 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302328 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302356 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" 
(UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302408 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302483 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302530 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302554 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302594 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302618 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302643 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302713 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302728 4829 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302740 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302753 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302765 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302777 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302790 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302816 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302829 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302842 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302854 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302866 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302879 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302891 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302903 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302916 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302929 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302941 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node 
\"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302953 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302965 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302977 4829 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.302989 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303013 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303026 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303039 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 
15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303058 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303070 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303082 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303094 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303107 4829 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303119 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303133 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303145 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303210 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303247 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303263 4829 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303277 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303290 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303303 4829 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303316 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303329 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303341 4829 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303354 4829 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303367 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303380 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303393 4829 reconciler_common.go:293] "Volume detached for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303408 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303422 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303435 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303447 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303460 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303472 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303484 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303497 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303508 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303521 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303534 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303546 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303558 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303592 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on 
node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303605 4829 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303617 4829 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.304308 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.304953 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305040 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305185 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305384 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305405 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305661 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306174 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306550 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306722 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.306885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308435 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.303628 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308897 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308910 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308920 4829 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308935 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308946 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308956 4829 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308965 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308976 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308986 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.308996 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309007 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309017 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") 
on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309026 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309035 4829 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309045 4829 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309054 4829 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309063 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309073 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309083 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc 
kubenswrapper[4829]: I0217 15:55:08.309091 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309100 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309109 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309118 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309126 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309138 4829 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309148 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309156 4829 reconciler_common.go:293] "Volume detached for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309165 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309174 4829 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309182 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309192 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309201 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309210 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309219 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309228 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309237 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309246 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309255 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309272 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309283 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309292 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309300 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309311 4829 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309320 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309328 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309336 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309345 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 
17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309353 4829 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309361 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309371 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309379 4829 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309388 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309397 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309405 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309414 4829 reconciler_common.go:293] "Volume detached for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309423 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309432 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309440 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309449 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309457 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309467 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309476 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" 
DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309484 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309493 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309503 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309511 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309520 4829 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.309942 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310115 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310156 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310255 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.305403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.310679 4829 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311058 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311439 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311492 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311448 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311500 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311610 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311625 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311669 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.811651893 +0000 UTC m=+21.228669971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.311701 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:08.811681764 +0000 UTC m=+21.228699752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.311910 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312635 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312736 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.312758 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.313164 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.314032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.314205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317679 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.317775 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.318873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.319363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.320199 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.320944 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.323193 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.327815 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.327896 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.328443 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.328550 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.330044 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.330812 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.332691 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.332885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.334435 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.335476 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335801 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335821 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335836 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335921 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:08.8359006 +0000 UTC m=+21.252918578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335986 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.335998 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.336010 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.336042 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:08.836034883 +0000 UTC m=+21.253052861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.337642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.361619 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362126 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362343 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362371 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.362392 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.364482 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.364481 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.366140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.366820 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369005 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369255 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369261 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.369450 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.374342 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.377144 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.387026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.390519 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.400112 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.407030 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.409965 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410023 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410050 4829 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410060 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410070 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410082 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410090 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410098 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410106 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" 
(UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410114 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410160 4829 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410172 4829 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410182 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410192 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410201 4829 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410210 4829 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410220 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410229 4829 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410276 4829 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410286 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410297 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410306 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" 
DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410314 4829 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410322 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410331 4829 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410340 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410348 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410357 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410366 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc 
kubenswrapper[4829]: I0217 15:55:08.410374 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410383 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410392 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410400 4829 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410409 4829 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410419 4829 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410428 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410436 4829 reconciler_common.go:293] "Volume detached 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410446 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410456 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410464 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410473 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410481 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410490 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410498 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410507 4829 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410516 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410524 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410532 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410543 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410552 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410561 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410583 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410597 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410605 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410614 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410622 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410630 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410639 4829 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" 
Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410647 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410657 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.410666 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.412230 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.413726 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" exitCode=255 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.418553 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.419178 4829 scope.go:117] "RemoveContainer" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.421330 4829 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.428697 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.432196 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.437419 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.440696 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.445760 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.452244 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.452902 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.455564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.463286 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.464133 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.464909 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.465297 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.466976 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.467535 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.467762 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.474773 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.483476 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.493169 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.500835 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.511874 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.512831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.523973 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.529460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.532149 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.538383 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.539368 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.540155 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.541864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.542079 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.542535 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.543819 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.544566 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.545244 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.547733 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.550734 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a WatchSource:0}: Error finding container 15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a: Status 404 returned error can't find the container with id 15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.554440 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.554536 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.555007 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.556228 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.556738 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.557940 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.558645 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.559141 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.560392 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.560994 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.561464 4829 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.562049 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: W0217 15:55:08.562718 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9 WatchSource:0}: Error finding container 6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9: Status 404 returned error can't find the container with id 6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9 Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.566854 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.568021 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.569794 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.572490 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.573598 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.574434 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.575484 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.576173 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.576632 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.577661 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.578662 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.579440 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.580468 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.581106 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.581993 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.582906 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.583468 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.584302 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.584942 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.585431 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.586525 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.586989 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.587829 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.587863 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.612273 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.612298 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626817 4829 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626828 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.626856 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734523 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.734547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.792191 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.805824 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.809168 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813632 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813731 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813754 4829 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.813739465 +0000 UTC m=+22.230757443 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.813776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813828 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813835 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813863 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:09.813856698 +0000 UTC m=+22.230874676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.813874 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.813868788 +0000 UTC m=+22.230886766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.815615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.822322 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.835365 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.836925 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.844357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.852587 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.861676 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.871151 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.880843 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.890951 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.899674 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.908177 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.914361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.914391 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914498 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914513 4829 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914524 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914540 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914567 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.914554806 +0000 UTC m=+22.331572784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914596 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914613 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: E0217 15:55:08.914681 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:09.914661569 +0000 UTC m=+22.331679617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.919699 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.928983 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.937886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.939164 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4829]: I0217 15:55:08.947876 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.023262 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:50:08 +0000 UTC, rotation deadline is 2026-12-02 15:06:57.896570702 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.023339 4829 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6911h11m48.873235342s for next certificate rotation Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.041266 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.143756 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.227933 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:14:01.04823406 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246089 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.246113 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.278557 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.278750 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.302458 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-grnlx"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.302821 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.304540 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.304595 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.305049 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.322614 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.332948 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.340811 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347630 4829 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.347640 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.354028 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.366013 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.376361 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.386701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.402260 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.413746 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " 
pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.419737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6e09c9d19bf94b0b7ba1c3004ade50d2f6478f236cf0517b20501f5cb78b74f9"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421485 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421528 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.421542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"15b5d01064339ca9440803b873d0f2cd4381e6db64d24836a968647d20e3c86a"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.423840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.423868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1ed8cb51a32e4d7ef1dc86e7305df200f375ddb5084e7e7f512d68611ffa84ba"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.425902 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.428166 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.428193 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.436123 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.447307 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.449866 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.458592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.470955 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.481873 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.497560 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.509392 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520167 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520261 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.520525 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-hosts-file\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.523085 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.535096 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.537146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccmvh\" (UniqueName: \"kubernetes.io/projected/9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf-kube-api-access-ccmvh\") pod \"node-resolver-grnlx\" (UID: \"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\") " pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.547805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551708 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.551768 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.560394 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.569637 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.580971 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.596303 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.611039 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.613214 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-grnlx" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.639167 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.653257 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.666952 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.685657 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.705657 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nhlmt"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.705942 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.706839 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-p9rjv"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.707303 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.707732 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.708136 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709443 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709709 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.709923 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.713281 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.714682 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fzwcw"] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.715565 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.716521 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718456 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.718550 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.732829 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.754987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.755057 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.759048 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.770684 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.785866 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.810143 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822202 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822295 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822312 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822345 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822362 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822389 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822402 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822415 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822443 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822563 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822539347 +0000 UTC m=+24.239557325 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822627 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822677 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822702 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822724 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822739 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822753 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822821 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822836 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822838 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822873 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:09 crc 
kubenswrapper[4829]: I0217 15:55:09.822878 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822905 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822894967 +0000 UTC m=+24.239913025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.822928 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.822918408 +0000 UTC m=+24.239936386 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822961 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822981 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.822995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.823010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.827067 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.844325 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858306 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858595 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.858638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.858662 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.872458 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.886461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.905747 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.921900 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924173 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924458 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924676 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" 
(UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924739 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924805 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924875 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod 
\"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925196 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " 
pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-cnibin\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924710 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-etc-kubernetes\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-multus\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-os-release\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 
crc kubenswrapper[4829]: I0217 15:55:09.925363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-binary-copy\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-kubelet\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924694 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-hostroot\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cnibin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-socket-dir-parent\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925031 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-system-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-os-release\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.924716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-multus-certs\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925111 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925614 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925643 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.925691 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.925674943 +0000 UTC m=+24.342692921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925137 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-system-cni-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925917 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.925992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") 
pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926187 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926318 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926377 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926418 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-cni-binary-copy\") pod \"multus-nhlmt\" (UID: 
\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926087 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-k8s-cni-cncf-io\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-var-lib-cni-bin\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-conf-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926497 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-host-run-netns\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926439 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4829]: 
I0217 15:55:09.926686 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926749 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926813 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926966 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-rootfs\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.926993 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-mcd-auth-proxy-config\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-cni-dir\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927410 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d84d045f-af00-4d13-be03-8b03ad77f980-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.926662 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.927477 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 15:55:09.927491 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: E0217 
15:55:09.927535 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:11.927520354 +0000 UTC m=+24.344538322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.927661 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d84d045f-af00-4d13-be03-8b03ad77f980-tuning-conf-dir\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.928022 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-multus-daemon-config\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.929723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-proxy-tls\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.936097 4829 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.941342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfkr\" (UniqueName: \"kubernetes.io/projected/fbb42864-7e0c-40a9-a14a-5f4155ed0e94-kube-api-access-jdfkr\") pod \"machine-config-daemon-fzwcw\" (UID: \"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\") " pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.941529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545sp\" (UniqueName: \"kubernetes.io/projected/88e25bc5-0b59-4edf-a8f6-1a5a026155c4-kube-api-access-545sp\") pod \"multus-nhlmt\" (UID: \"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\") " pod="openshift-multus/multus-nhlmt" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.958488 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcg7\" (UniqueName: \"kubernetes.io/projected/d84d045f-af00-4d13-be03-8b03ad77f980-kube-api-access-4fcg7\") pod \"multus-additional-cni-plugins-p9rjv\" (UID: \"d84d045f-af00-4d13-be03-8b03ad77f980\") " pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 
15:55:09.960387 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.960699 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4829]: I0217 15:55:09.993721 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.018076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-nhlmt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.024169 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.026918 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88e25bc5_0b59_4edf_a8f6_1a5a026155c4.slice/crio-a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26 WatchSource:0}: Error finding container a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26: Status 404 returned error can't find the container with id a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26 Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.029229 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.035475 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.042368 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84d045f_af00_4d13_be03_8b03ad77f980.slice/crio-97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c WatchSource:0}: Error finding container 97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c: Status 404 returned error can't find the container with id 97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.048769 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57 WatchSource:0}: Error finding container 28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57: Status 404 returned error can't find the container with id 
28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57 Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.063380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.081364 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.107091 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.108022 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.112150 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.126039 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.145248 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165127 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.165383 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.185730 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.205296 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.225501 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229590 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:31:44.776835945 +0000 UTC Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229857 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229872 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229886 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: 
\"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.229953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230056 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod 
\"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230143 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230165 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230238 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230271 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.230288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.245193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.267956 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.279328 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.279393 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:10 crc kubenswrapper[4829]: E0217 15:55:10.279455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:10 crc kubenswrapper[4829]: E0217 15:55:10.279523 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.300232 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331147 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: 
\"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331188 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331282 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331298 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331374 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331393 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331401 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331586 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331608 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.331680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332093 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: 
\"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332178 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332652 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") 
pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332742 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332612 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332707 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332594 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332238 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332724 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332476 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.332832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.334049 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.341653 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.360770 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"ovnkube-node-hjd7r\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.369876 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.394348 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.428157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.431397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grnlx" event={"ID":"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf","Type":"ContainerStarted","Data":"d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.431444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-grnlx" event={"ID":"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf","Type":"ContainerStarted","Data":"b35c1076d506b65cd7a9130098aa099a5128e53e681618b95f0d118dc6dbc9ca"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432659 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.432669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"28951c1c9b7adb81d636d4ae6d288e019172c035bbc480a3372b31873e032e57"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.433925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" 
event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.433968 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"97c52f9d23cea0f7e37d6744bdc5f6bc02e96d69e5006a59acfa8e51d13cb73c"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.441637 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.441811 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"a4ca3a1e78fa7a281e1d0bf335f6604dd9047e78d8bf8306f3a60c71632b4e26"} Feb 17 15:55:10 crc kubenswrapper[4829]: W0217 15:55:10.445048 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad9f982_deda_446c_8801_dc47104eee62.slice/crio-24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e WatchSource:0}: Error finding container 24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e: Status 404 returned error can't find the container with id 24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.457487 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.471955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.472292 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.479388 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.513265 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.554689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574252 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.574332 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.594886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.634140 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.676343 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.677446 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.717561 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.760375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.778970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.779056 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.800503 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.840565 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 
15:55:10.882297 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.882311 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.884021 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.918797 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.963710 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:10Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985540 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.985967 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4829]: I0217 15:55:10.986149 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.003931 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.089861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.090818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.091021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194220 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.194266 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.230144 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:38:04.729879071 +0000 UTC Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.279170 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.279340 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.296936 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.296984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.297039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.400894 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.401074 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.458529 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.466869 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54" exitCode=0 Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.467125 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473503 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12" exitCode=0 Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473771 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.473840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.483503 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.503858 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.504643 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.530349 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.547747 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.565647 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.584853 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.599727 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.607853 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.627242 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.645547 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.668411 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z"
Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.688824 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.707317 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.709985 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.726656 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.739726 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.753701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.772289 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.785028 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.794706 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.807250 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.811927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.812266 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.818464 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.838002 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853347 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.853473 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853584 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853640 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.853626837 +0000 UTC m=+28.270644815 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853699 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853733 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.85372678 +0000 UTC m=+28.270744758 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.853839 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.853818132 +0000 UTC m=+28.270836110 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.886030 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915244 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915270 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.915279 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.922135 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.954259 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.954343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954449 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954471 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954478 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954488 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954494 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954500 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954559 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.954540581 +0000 UTC m=+28.371558559 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: E0217 15:55:11.954593 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:15.954568512 +0000 UTC m=+28.371586490 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:11 crc kubenswrapper[4829]: I0217 15:55:11.957952 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:11Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.004918 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017815 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.017853 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.038197 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.119968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120001 4829 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120008 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.120030 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223220 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.223295 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.230678 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:09:17.353789243 +0000 UTC Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.278605 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:12 crc kubenswrapper[4829]: E0217 15:55:12.278789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.278824 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:12 crc kubenswrapper[4829]: E0217 15:55:12.278991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326356 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.326396 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429667 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.429802 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.479801 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1" exitCode=0 Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.479914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486107 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486224 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486269 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.486307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.505006 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.528034 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534711 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534736 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.534754 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.547809 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.569862 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.593491 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.612125 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638510 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638974 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.638984 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.658651 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.678624 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.695520 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.718817 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742801 4829 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.742842 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.744273 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.764230 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:12Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846730 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846798 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc 
kubenswrapper[4829]: I0217 15:55:12.846843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.846861 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4829]: I0217 15:55:12.950870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.053391 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.156304 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.231446 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:30:31.156869772 +0000 UTC Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258335 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.258413 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.278717 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:13 crc kubenswrapper[4829]: E0217 15:55:13.278863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.361433 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464928 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.464967 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.492934 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb" exitCode=0 Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.492992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.512850 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.535857 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gbvgd"] Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.536448 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.538273 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.539421 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.539775 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.540211 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.542105 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.564090 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568712 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.568758 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.580061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.599347 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.614277 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.632965 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.646888 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.661091 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670618 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.670670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.670679 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671018 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.671104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.680187 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.692436 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.710175 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.736607 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.755626 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771910 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771783 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71cd8bd1-bb6a-405b-b23d-26c561d126d8-host\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.771996 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773288 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.773331 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.774940 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71cd8bd1-bb6a-405b-b23d-26c561d126d8-serviceca\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.783854 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.791564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vmz\" (UniqueName: \"kubernetes.io/projected/71cd8bd1-bb6a-405b-b23d-26c561d126d8-kube-api-access-77vmz\") pod \"node-ca-gbvgd\" (UID: \"71cd8bd1-bb6a-405b-b23d-26c561d126d8\") " pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.802052 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.818403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.832958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.845206 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.860925 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gbvgd" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.862245 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.875957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.878875 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.903108 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.918182 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.932074 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.953381 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 
15:55:13.979751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.979838 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4829]: I0217 15:55:13.983174 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:13Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.003491 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.085947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.085988 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.086015 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc 
kubenswrapper[4829]: I0217 15:55:14.086032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.086044 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188362 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.188380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.232238 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:55:45.948537825 +0000 UTC Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.278409 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:14 crc kubenswrapper[4829]: E0217 15:55:14.278546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.279015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:14 crc kubenswrapper[4829]: E0217 15:55:14.279211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.291401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.394955 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.497873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.501512 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571" exitCode=0 Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.501612 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.511501 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.513158 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gbvgd" event={"ID":"71cd8bd1-bb6a-405b-b23d-26c561d126d8","Type":"ContainerStarted","Data":"26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.513205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gbvgd" event={"ID":"71cd8bd1-bb6a-405b-b23d-26c561d126d8","Type":"ContainerStarted","Data":"d5ea150b466124ab69dc34fd9ed80073b57ad7873cf729b51d0a997087244eb8"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.520316 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.536564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.552004 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.564026 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.576549 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.615929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.618808 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.659818 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.671095 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.685603 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.697609 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.715692 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719201 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719211 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.719235 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.727172 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.742043 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.752173 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.764929 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.774194 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.786888 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.796025 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.807618 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.815817 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.821125 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.826850 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.835640 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.846713 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.860658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.872599 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.884073 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.898638 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.917296 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923189 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4829]: I0217 15:55:14.923260 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.026114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128402 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.128438 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.230500 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.232653 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:52:41.793865438 +0000 UTC Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.278914 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.279048 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.333547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.436517 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.520263 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d" exitCode=0 Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.520333 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539460 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539541 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.539642 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.540066 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.556915 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.572724 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.581795 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.612662 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.624505 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.641148 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643518 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.643668 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.658566 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.679468 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.691777 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.705830 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.718960 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.733745 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747062 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.747154 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.749720 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:15Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.849554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850231 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.850275 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.891899 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892122 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892091526 +0000 UTC m=+36.309109534 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.892398 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.892458 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892626 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892669 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892658492 +0000 UTC m=+36.309676560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892766 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.892798 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.892789016 +0000 UTC m=+36.309806994 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.952439 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.993864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4829]: I0217 15:55:15.994053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994007 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994100 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994113 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994158 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.994144032 +0000 UTC m=+36.411162010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994240 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994259 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994268 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:15 crc kubenswrapper[4829]: E0217 15:55:15.994302 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:23.994293676 +0000 UTC m=+36.411311654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054926 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054949 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.054958 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.157926 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.233760 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:09:39.689941749 +0000 UTC Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.260152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.278824 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.278923 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:16 crc kubenswrapper[4829]: E0217 15:55:16.278944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:16 crc kubenswrapper[4829]: E0217 15:55:16.279087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.361933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.361990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362006 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362028 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.362045 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465651 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465668 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.465712 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.528382 4829 generic.go:334] "Generic (PLEG): container finished" podID="d84d045f-af00-4d13-be03-8b03ad77f980" containerID="ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3" exitCode=0 Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.528433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerDied","Data":"ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.544130 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.558434 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 
15:55:16.569801 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.569814 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.577923 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.588436 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.600562 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.611981 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.623688 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.636310 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.646737 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.658719 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.671488 4829 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.676154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.687098 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.697416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.709165 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:16Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.774200 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.877771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4829]: I0217 15:55:16.980928 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.087711 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.191496 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.234791 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:22:01.322823243 +0000 UTC Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.278568 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:17 crc kubenswrapper[4829]: E0217 15:55:17.279151 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295806 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.295845 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.398899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502467 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.502654 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.538011 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" event={"ID":"d84d045f-af00-4d13-be03-8b03ad77f980","Type":"ContainerStarted","Data":"3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.544953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545405 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545478 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.545504 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.569939 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.584492 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.584727 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.585446 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605697 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.605774 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.606514 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.624858 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.639298 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.652520 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.673406 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.686899 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.700871 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707950 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707967 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.707992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.708022 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.718859 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.737602 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.754044 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.775545 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.803145 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.809994 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.817999 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.831658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.848934 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.864533 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.882199 4829 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.894812 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913421 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.913551 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.925272 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.941112 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.959315 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.976214 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4829]: I0217 15:55:17.988671 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.006081 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.016739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.036161 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.119159 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197209 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197310 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.197327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.215374 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221329 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.221387 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.236004 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:35:31.775002292 +0000 UTC Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.242560 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",
\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248950 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.248980 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.264769 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.270110 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.279000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.279013 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.279149 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.279424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.288472 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.292912 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.296636 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.310194 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: E0217 15:55:18.310417 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312594 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.312668 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.315866 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.332264 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.352121 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.369425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.391633 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.410012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.414999 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.415020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.415034 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.426643 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.443184 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.460640 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.482215 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.504241 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521230 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.521372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.526086 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.545037 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.625866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728753 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.728794 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.832398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.935997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.936022 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4829]: I0217 15:55:18.936040 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039140 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.039184 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142148 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.142210 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.236677 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:10:51.115141822 +0000 UTC Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245742 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245760 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.245818 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.279176 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:19 crc kubenswrapper[4829]: E0217 15:55:19.279418 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347713 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.347753 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.450238 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552737 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.552807 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.620701 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667513 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667556 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.667602 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.770413 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.771828 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.872889 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4829]: I0217 15:55:19.975498 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.078873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.181931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.182071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.237644 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:26:33.051878035 +0000 UTC Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.279358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.279449 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:20 crc kubenswrapper[4829]: E0217 15:55:20.279550 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:20 crc kubenswrapper[4829]: E0217 15:55:20.279673 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290965 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.290985 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394807 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.394870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497521 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.497545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.558365 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.562922 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" exitCode=1 Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.562990 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.564069 4829 scope.go:117] "RemoveContainer" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.594127 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600226 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600288 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.600317 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.617014 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.634926 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.650917 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.670279 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.684519 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703824 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.703850 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.704924 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.718993 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.735002 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.751427 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.773892 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.794865 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.806973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.807002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.807025 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.815154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.848055 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.857161 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910145 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4829]: I0217 15:55:20.910194 4829 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012238 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.012249 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113884 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.113915 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216740 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.216873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.238069 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:39:29.07447338 +0000 UTC Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.278790 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4829]: E0217 15:55:21.278961 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.319494 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.422187 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.525352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.525722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.526374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.528025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.528180 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.577376 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.581686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.582248 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.601433 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.621793 4829 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632528 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.632547 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.644127 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.666942 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.682450 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.704271 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.720326 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735603 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735803 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.735971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.736108 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.740232 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.758746 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.778412 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.803518 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.827219 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.839973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.840001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.840022 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.854480 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-net
ns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"}
,{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.874839 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943219 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4829]: I0217 15:55:21.943236 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.045987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046079 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046099 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046125 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.046143 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149470 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.149533 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.239026 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:34:41.961089537 +0000 UTC Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.252825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253329 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.253823 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.278980 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.279010 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.279153 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.279245 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356243 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356322 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.356363 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.458928 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.562901 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.589681 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.591635 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/0.log" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596515 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" exitCode=1 Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596622 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.596703 4829 scope.go:117] "RemoveContainer" containerID="ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.597851 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:22 crc kubenswrapper[4829]: E0217 15:55:22.598173 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.636811 4829 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.657132 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.666924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.667021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.667094 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.678423 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.686731 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5"] Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.687378 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.689657 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.689792 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.702846 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.721147 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.743563 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.768335 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.769931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.769994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770050 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.770075 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771125 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771205 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766kg\" (UniqueName: \"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771600 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.771634 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.794339 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.812871 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.829681 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.850280 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.866410 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872485 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872541 
4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872564 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872614 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766kg\" (UniqueName: \"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.872658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.874162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.874796 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/577908b4-4366-480b-974e-cee2a3ff74a7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.881243 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/577908b4-4366-480b-974e-cee2a3ff74a7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.887921 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.901613 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766kg\" (UniqueName: 
\"kubernetes.io/projected/577908b4-4366-480b-974e-cee2a3ff74a7-kube-api-access-766kg\") pod \"ovnkube-control-plane-749d76644c-jwdn5\" (UID: \"577908b4-4366-480b-974e-cee2a3ff74a7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.902826 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.918823 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.935701 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.951034 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.969517 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc 
kubenswrapper[4829]: I0217 15:55:22.975415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975443 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.975502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4829]: I0217 15:55:22.988933 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.005287 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.008718 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.015174 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.030494 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.045184 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.071169 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.080258 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.083132 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.096757 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.109549 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.120341 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.130613 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182845 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.182873 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.239318 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:05:31.065230588 +0000 UTC Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.279050 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.279250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.286238 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.392974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.393556 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.496939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.497311 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.497699 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.600813 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601130 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.601192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" event={"ID":"577908b4-4366-480b-974e-cee2a3ff74a7","Type":"ContainerStarted","Data":"9b8ff1f9d61395f337f02c8e72b0dd2435eda51bb32b697f6493af99b0f8fcf0"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.602630 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.605021 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.605155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.623084 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.636357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.648670 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.665655 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.679216 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.700902 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.704200 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.719816 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.740636 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.760302 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.780818 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.803255 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809816 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.809870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.827608 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.858235 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebffb92fcad070cc04f6e159a2cadadc4bb3fa5acf80eb0977309b8defe4ab22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:19Z\\\",\\\"message\\\":\\\".go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:19.940718 6108 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:19.940730 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:19.940744 6108 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 15:55:19.940751 6108 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:19.940802 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:19.940847 6108 factory.go:656] Stopping watch factory\\\\nI0217 15:55:19.940863 6108 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:19.940872 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 15:55:19.940880 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:19.940888 6108 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:19.940895 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:55:19.940901 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:19.940909 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:19.940935 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 
handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.879251 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.897114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913367 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.913390 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.916514 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.939322 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.961689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.977258 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.982966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.983115 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.983158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983254 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.983214861 +0000 UTC m=+52.400232879 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983266 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983315 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983368 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.983354105 +0000 UTC m=+52.400372113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: E0217 15:55:23.983487 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:39.983400806 +0000 UTC m=+52.400418814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:23 crc kubenswrapper[4829]: I0217 15:55:23.993763 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.011062 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.015724 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.025830 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.040982 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.061805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.079885 4829 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.083941 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.084021 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084124 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084157 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084161 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084179 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084181 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084240 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084247 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:40.084224777 +0000 UTC m=+52.501242795 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.084341 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:40.08431749 +0000 UTC m=+52.501335468 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.096180 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.114380 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118656 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118684 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.118702 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.128533 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.146234 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.160144 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.193759 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.194504 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.194630 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.210896 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221659 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.221724 4829 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.230067 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.240038 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:40:10.460967743 +0000 UTC Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.245386 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.263507 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.276791 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.278513 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.278672 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.278770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.278887 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.285506 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.285611 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.295413 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.307227 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.320637 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc 
kubenswrapper[4829]: I0217 15:55:24.325043 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325118 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.325165 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.336061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.354066 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.372958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.386281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.386384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.386471 4829 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.386567 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:24.886544168 +0000 UTC m=+37.303562146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.387043 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0
f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.404266 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d9
64abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.416079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mtt6\" (UniqueName: \"kubernetes.io/projected/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-kube-api-access-5mtt6\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.432689 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.443254 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.466236 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.485230 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535982 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.535993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.536031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.536042 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.639544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742485 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.742627 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.845840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.846100 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.918194 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.918398 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: E0217 15:55:24.918514 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:25.918481402 +0000 UTC m=+38.335499420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4829]: I0217 15:55:24.949610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.052866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.156617 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.240174 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:36:08.325710653 +0000 UTC Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.259983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260092 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.260111 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.279358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.279547 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363297 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363356 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.363378 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466274 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466376 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.466394 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569645 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.569723 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672365 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672387 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.672403 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775257 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.775273 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.878199 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.927715 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.927895 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:25 crc kubenswrapper[4829]: E0217 15:55:25.927973 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:27.927951263 +0000 UTC m=+40.344969251 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980411 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:25 crc kubenswrapper[4829]: I0217 15:55:25.980440 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:25Z","lastTransitionTime":"2026-02-17T15:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.083753 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187199 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.187283 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.241318 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:31:46.010139699 +0000 UTC Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.278807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.278959 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279168 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.279247 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:26 crc kubenswrapper[4829]: E0217 15:55:26.279473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290428 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.290472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394394 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.394412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.497984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.498005 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601502 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.601679 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.705446 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.809888 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:26 crc kubenswrapper[4829]: I0217 15:55:26.913128 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:26Z","lastTransitionTime":"2026-02-17T15:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016714 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.016899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.119530 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.223521 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.241652 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:01:19.823191777 +0000 UTC Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.256888 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.278362 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.278548 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.280357 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.299539 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.319727 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326980 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.326997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.327023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.327041 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.339667 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.360082 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.381142 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.397333 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.417247 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.431281 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.433723 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.450985 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.470905 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.491390 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.508548 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.530923 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.535892 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.563980 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.584444 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc 
kubenswrapper[4829]: I0217 15:55:27.641415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.641435 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.744979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.745149 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849438 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.849712 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.951202 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.951516 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:27 crc kubenswrapper[4829]: E0217 15:55:27.951655 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:31.951628226 +0000 UTC m=+44.368646234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:27 crc kubenswrapper[4829]: I0217 15:55:27.953544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:27Z","lastTransitionTime":"2026-02-17T15:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057167 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.057230 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.160417 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.242795 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:29:49.625230541 +0000 UTC Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.263951 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.278408 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.278492 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.279137 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.279500 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.279813 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.280260 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.329958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.343243 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.363502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.367881 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.377836 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.392114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.408894 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.425469 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.441943 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.454804 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.471644 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474086 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.474268 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.477466 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.500915 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.501242 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0217 15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505414 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505427 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.505436 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.524375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
78d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.526686 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.530890 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.538704 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.548098 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.551958 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.556310 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.574725 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.575705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579828 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.579869 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.587061 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.591850 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:28 crc kubenswrapper[4829]: E0217 15:55:28.592071 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.594781 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.697650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801786 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801909 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.801932 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:28 crc kubenswrapper[4829]: I0217 15:55:28.905545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:28Z","lastTransitionTime":"2026-02-17T15:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008777 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.008899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.009077 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.111956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112069 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.112087 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215903 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.215921 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.243308 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:46:25.710771275 +0000 UTC Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.278340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:29 crc kubenswrapper[4829]: E0217 15:55:29.278549 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.319689 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422563 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.422615 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525600 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.525739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.628513 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731872 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.731995 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.732017 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.835968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.836002 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.836028 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:29 crc kubenswrapper[4829]: I0217 15:55:29.939764 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:29Z","lastTransitionTime":"2026-02-17T15:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043228 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.043308 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.146625 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.244186 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:24:31.194911638 +0000 UTC Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249598 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.249656 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278661 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278709 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.278737 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.278852 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.278977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:30 crc kubenswrapper[4829]: E0217 15:55:30.279284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352175 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.352231 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.459847 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.563485 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667259 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.667321 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.771929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.875502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:30 crc kubenswrapper[4829]: I0217 15:55:30.978450 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:30Z","lastTransitionTime":"2026-02-17T15:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.080627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.080979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.081532 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184831 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.184997 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.244746 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:28:37.548994436 +0000 UTC Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.278262 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:31 crc kubenswrapper[4829]: E0217 15:55:31.278432 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288087 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.288132 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391913 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.391991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.392022 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.392039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.495253 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597949 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.597978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.598009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.598032 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700881 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.700989 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804305 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804359 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.804395 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.906993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.907018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.907039 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:31Z","lastTransitionTime":"2026-02-17T15:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:31 crc kubenswrapper[4829]: I0217 15:55:31.999686 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:31.999873 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:31.999987 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:39.999957261 +0000 UTC m=+52.416975279 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009912 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.009929 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113533 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.113646 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217415 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217464 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.217480 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.245873 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:35:17.545244306 +0000 UTC Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.279520 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.279569 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.279766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.280069 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.280852 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:32 crc kubenswrapper[4829]: E0217 15:55:32.281154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.321299 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425416 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425513 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.425605 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.528879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.528943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.529072 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.633180 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736450 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736648 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.736734 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.840460 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944745 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:32 crc kubenswrapper[4829]: I0217 15:55:32.944763 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:32Z","lastTransitionTime":"2026-02-17T15:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.047916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.047991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.048050 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151125 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.151182 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.246442 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:16:26.263626287 +0000 UTC Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.253923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.254106 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.278491 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:33 crc kubenswrapper[4829]: E0217 15:55:33.278736 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.357759 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.464883 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.567982 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.568030 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.568047 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.671597 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775162 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.775290 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.879515 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983432 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983457 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:33 crc kubenswrapper[4829]: I0217 15:55:33.983475 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:33Z","lastTransitionTime":"2026-02-17T15:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.087972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.088240 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.191951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.191993 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.192025 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.247287 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:55:28.565877523 +0000 UTC Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278715 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278785 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.278898 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.278887 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.278980 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:34 crc kubenswrapper[4829]: E0217 15:55:34.279092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294516 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.294554 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398179 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.398221 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.502173 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605441 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.605486 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.708953 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.709123 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.812485 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915859 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:34 crc kubenswrapper[4829]: I0217 15:55:34.915912 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:34Z","lastTransitionTime":"2026-02-17T15:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.018959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.019071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122668 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.122786 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.225944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226026 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.226085 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.247694 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:27:48.229588399 +0000 UTC Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.279022 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:35 crc kubenswrapper[4829]: E0217 15:55:35.279164 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328154 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328467 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.328488 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434217 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434310 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.434364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.536851 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641114 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641191 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.641229 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.743992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.744110 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847455 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.847496 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950476 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950620 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:35 crc kubenswrapper[4829]: I0217 15:55:35.950644 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:35Z","lastTransitionTime":"2026-02-17T15:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053721 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.053764 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156152 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.156209 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.248681 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:37:45.030157773 +0000 UTC Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.259663 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278324 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.278488 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.278565 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.279093 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:36 crc kubenswrapper[4829]: E0217 15:55:36.279207 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.279393 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362702 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.362738 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465333 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.465391 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568951 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568981 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.568995 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.659237 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.662330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.662967 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671894 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.671974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.672000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.672021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.676233 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.689994 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T
15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8
b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.702891 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.715987 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.728754 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.743521 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.753985 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.768286 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc 
kubenswrapper[4829]: I0217 15:55:36.775269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775317 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.775326 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.785645 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.800375 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.813162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.839792 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b
9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.867428 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877395 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.877424 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.884671 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.896428 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.910189 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:36Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979120 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979129 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:36 crc kubenswrapper[4829]: I0217 15:55:36.979152 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:36Z","lastTransitionTime":"2026-02-17T15:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081539 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.081563 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183948 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183974 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.183985 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.249335 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:25:50.856785763 +0000 UTC Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.279112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:37 crc kubenswrapper[4829]: E0217 15:55:37.279264 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286063 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.286108 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.388989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389141 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389160 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.389694 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492708 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.492771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595957 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.595977 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.668992 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.669858 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/1.log" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674019 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" exitCode=1 Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.674122 4829 scope.go:117] "RemoveContainer" containerID="bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.675806 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:37 crc kubenswrapper[4829]: E0217 15:55:37.677474 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.699871 4829 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.699961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700014 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.700058 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.701376 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d5
69f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.722010 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.741689 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.758894 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.778000 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc 
kubenswrapper[4829]: I0217 15:55:37.796705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.802759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.802945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803246 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.803350 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.813090 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.833513 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.852695 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c85
8b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:
55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.883004 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service 
openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.902475 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906529 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906614 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.906640 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:37Z","lastTransitionTime":"2026-02-17T15:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.922497 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.941088 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.959748 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:37 crc kubenswrapper[4829]: I0217 15:55:37.981191 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:37.999970 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.009979 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010029 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010041 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010058 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.010071 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.112904 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.216605 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.250390 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:18:32.983966754 +0000 UTC Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.278863 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.279044 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.279275 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.279655 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.279977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.280113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.301385 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.320314 4829 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.321683 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.339955 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b98
5ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.356502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.374304 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc 
kubenswrapper[4829]: I0217 15:55:38.396182 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.414106 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.423826 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.434124 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.459018 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.495551 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcfb669bbd70856ff345201499319549e1ca85fb2c01eea73a057dc5d8ddc40d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"message\\\":\\\" 6269 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 15:55:21.510421 6269 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:55:21.510496 6269 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:55:21.510506 6269 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 
15:55:21.510543 6269 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:55:21.510860 6269 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:55:21.510880 6269 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:55:21.512727 6269 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:55:21.512781 6269 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:55:21.512840 6269 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:55:21.512837 6269 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:55:21.512867 6269 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:55:21.512875 6269 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 15:55:21.512938 6269 factory.go:656] Stopping watch factory\\\\nI0217 15:55:21.512955 6269 ovnkube.go:599] Stopped ovnkube\\\\nI0217 15:55:21.512951 6269 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service 
openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.517079 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.526938 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.534461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.547163 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.558202 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.569936 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.581393 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628979 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.628994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.629004 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.680467 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.686297 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:38 crc kubenswrapper[4829]: E0217 15:55:38.686467 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.720679 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731744 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.731789 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.740791 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.758000 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.775877 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.793340 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.818042 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.835774 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.838568 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.854938 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.870856 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.888154 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.903543 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.917208 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc 
kubenswrapper[4829]: I0217 15:55:38.932762 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.937981 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.938006 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.951221 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.970561 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.986071 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:38Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992227 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:38 crc kubenswrapper[4829]: I0217 15:55:38.992686 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:38Z","lastTransitionTime":"2026-02-17T15:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.012691 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.018455 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.038118 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.042992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.043017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.043035 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.063038 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067826 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.067867 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.086199 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091716 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.091772 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.110485 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:39Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.110880 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.112674 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.216377 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.251593 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:45:49.658052468 +0000 UTC Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.278958 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.279120 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.319994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.320109 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.423709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.424108 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.424142 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528107 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.528159 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631917 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.631993 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.734720 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838114 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.838190 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.941231 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:39Z","lastTransitionTime":"2026-02-17T15:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988356 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:39 crc kubenswrapper[4829]: I0217 15:55:39.988692 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.988824 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.988901 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.988877606 +0000 UTC m=+84.405895614 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.989245 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.989226965 +0000 UTC m=+84.406244973 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.989719 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:39 crc kubenswrapper[4829]: E0217 15:55:39.990126 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:11.990044427 +0000 UTC m=+84.407062435 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044160 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044229 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044267 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.044319 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090184 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090450 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090502 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090523 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090457 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090622 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090638 
4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:12.090612962 +0000 UTC m=+84.507630970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.090687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090736 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:55:56.090715885 +0000 UTC m=+68.507733983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090790 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090817 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090837 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.090884 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:12.090867619 +0000 UTC m=+84.507885637 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.147343 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.229433 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.247360 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.247693 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250530 4829 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250713 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.250943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.251199 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.251718 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:50:14.984706002 +0000 UTC Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.259417 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.277188 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279408 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279414 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.279602 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279653 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:40 crc kubenswrapper[4829]: E0217 15:55:40.279765 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.312339 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.329288 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.345685 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354311 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354374 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.354412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.366725 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.384858 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.398958 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.444437 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.462103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.462426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463182 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.463610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.472307 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.486840 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.499350 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.508891 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.518503 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc 
kubenswrapper[4829]: I0217 15:55:40.530114 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:40Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566591 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566636 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.566675 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669278 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.669398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772075 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.772139 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.875521 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.875868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.876303 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.979522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.979942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:40 crc kubenswrapper[4829]: I0217 15:55:40.980561 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:40Z","lastTransitionTime":"2026-02-17T15:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084408 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084451 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.084519 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187193 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187215 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.187267 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.252228 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:44:24.331860375 +0000 UTC Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.278862 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:41 crc kubenswrapper[4829]: E0217 15:55:41.279017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290095 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.290254 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.392981 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495277 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.495348 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597728 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597747 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597770 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.597787 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700676 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.700738 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805910 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805953 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.805967 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:41 crc kubenswrapper[4829]: I0217 15:55:41.910488 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:41Z","lastTransitionTime":"2026-02-17T15:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013410 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.013538 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115877 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.115990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.116009 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218484 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218608 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.218659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.253106 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:43:11.845140857 +0000 UTC Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278665 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278724 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.278747 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.278917 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.279054 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:42 crc kubenswrapper[4829]: E0217 15:55:42.279170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.322502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.425945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.426121 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528768 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528792 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528822 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.528841 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631869 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631895 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.631917 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.734749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838038 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.838165 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941505 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941616 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:42 crc kubenswrapper[4829]: I0217 15:55:42.941643 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:42Z","lastTransitionTime":"2026-02-17T15:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044529 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.044631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.148401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.250719 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.253803 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:20:14.422488803 +0000 UTC Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.279260 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:43 crc kubenswrapper[4829]: E0217 15:55:43.279441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.353693 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458200 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458224 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.458243 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.561533 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.663946 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.766958 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767048 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.767088 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869505 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.869652 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.972987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:43 crc kubenswrapper[4829]: I0217 15:55:43.973117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:43Z","lastTransitionTime":"2026-02-17T15:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076868 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.076888 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180300 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180403 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180431 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.180449 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.254645 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:19:09.033401846 +0000 UTC Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279325 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279368 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.279466 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279671 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279785 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:44 crc kubenswrapper[4829]: E0217 15:55:44.279978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283357 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283381 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.283398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386147 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.386191 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489673 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489844 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.489861 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594163 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.594223 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.697372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800905 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.800957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904655 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:44 crc kubenswrapper[4829]: I0217 15:55:44.904783 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:44Z","lastTransitionTime":"2026-02-17T15:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007930 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007946 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.007986 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111954 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.111995 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.215191 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.255705 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:47:46.304036461 +0000 UTC Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.279089 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:45 crc kubenswrapper[4829]: E0217 15:55:45.279279 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317875 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.317897 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420446 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.420493 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522725 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.522749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.626312 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729551 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729629 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.729647 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832007 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832131 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.832152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935726 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:45 crc kubenswrapper[4829]: I0217 15:55:45.935857 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:45Z","lastTransitionTime":"2026-02-17T15:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038545 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038669 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038696 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.038718 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141496 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141674 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.141697 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244842 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.244862 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.256286 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:05:42.852187954 +0000 UTC Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279142 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279270 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.279344 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:46 crc kubenswrapper[4829]: E0217 15:55:46.279754 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347856 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.347948 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.450973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451032 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.451073 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554122 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554199 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.554243 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656670 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656691 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.656708 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759626 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759682 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.759705 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862848 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862916 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.862980 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966707 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:46 crc kubenswrapper[4829]: I0217 15:55:46.966729 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:46Z","lastTransitionTime":"2026-02-17T15:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.070664 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.173983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.174210 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.256454 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:37:03.573425957 +0000 UTC Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.277947 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278070 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.278257 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:47 crc kubenswrapper[4829]: E0217 15:55:47.278422 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381426 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381562 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381622 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.381641 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485165 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.485314 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588200 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588248 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.588268 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692078 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692150 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692171 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.692219 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795025 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795103 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.795144 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898877 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:47 crc kubenswrapper[4829]: I0217 15:55:47.898944 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:47Z","lastTransitionTime":"2026-02-17T15:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002036 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.002154 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105164 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105189 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.105205 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.207943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208001 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.208058 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.256662 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:43:25.661975193 +0000 UTC Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279517 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279520 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.279527 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.279789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.279946 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:48 crc kubenswrapper[4829]: E0217 15:55:48.280292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.299137 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310503 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.310523 4829 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.320948 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.338905 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3
202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.356993 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.386455 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.405386 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415331 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415379 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415395 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.415434 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.428014 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.443967 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.460935 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.480470 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.502402 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 
15:55:48.518504 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.518524 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.523523 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.542376 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.567996 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.600364 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622456 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.622610 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.624502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
78d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.645681 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724659 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724678 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.724755 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827723 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827802 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.827856 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931424 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931447 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931479 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:48 crc kubenswrapper[4829]: I0217 15:55:48.931500 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:48Z","lastTransitionTime":"2026-02-17T15:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034561 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034627 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.034645 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138548 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.138678 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241807 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.241886 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.257516 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:12:47.251197412 +0000 UTC Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.278908 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.279062 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.344957 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.345122 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.397186 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398658 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.398892 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.420152 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.425930 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456130 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.456779 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.481857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482070 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482221 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482370 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.482497 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.502247 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507651 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507715 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507732 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.507776 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.527296 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:49 crc kubenswrapper[4829]: E0217 15:55:49.527523 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530295 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530320 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.530338 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633014 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633434 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633566 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.633757 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737221 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737318 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.737372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840734 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.840773 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:49 crc kubenswrapper[4829]: I0217 15:55:49.943427 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:49Z","lastTransitionTime":"2026-02-17T15:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.046819 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149642 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.149714 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252911 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.252938 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.258114 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:46:59.731259616 +0000 UTC Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278732 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278732 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.278804 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279712 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:50 crc kubenswrapper[4829]: E0217 15:55:50.279829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354602 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.354685 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457295 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457351 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.457376 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561368 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.561388 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664472 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664519 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.664558 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767527 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.767739 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870838 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870970 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.870991 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.974984 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975045 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:50 crc kubenswrapper[4829]: I0217 15:55:50.975114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:50Z","lastTransitionTime":"2026-02-17T15:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.077985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.078105 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181291 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.181401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.259414 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:36:54.308851347 +0000 UTC Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.279102 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:51 crc kubenswrapper[4829]: E0217 15:55:51.279859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.280153 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:55:51 crc kubenswrapper[4829]: E0217 15:55:51.280381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.284732 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.386869 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489783 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489829 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.489898 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592301 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592379 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.592417 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.694883 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.797722 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:51 crc kubenswrapper[4829]: I0217 15:55:51.900412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:51Z","lastTransitionTime":"2026-02-17T15:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002942 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.002987 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105799 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.105825 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207898 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.207940 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.260566 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:31:14.73664316 +0000 UTC Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279004 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279047 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279100 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.279210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:52 crc kubenswrapper[4829]: E0217 15:55:52.279562 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309838 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309880 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.309898 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412598 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412695 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.412749 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514919 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.514975 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617497 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617549 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617561 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617596 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.617610 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720652 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720742 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.720790 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823741 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823854 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823885 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.823907 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926168 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926181 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:52 crc kubenswrapper[4829]: I0217 15:55:52.926209 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:52Z","lastTransitionTime":"2026-02-17T15:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.029493 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.133461 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236292 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236344 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.236369 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.261432 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:55:48.770283694 +0000 UTC Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.278869 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:53 crc kubenswrapper[4829]: E0217 15:55:53.279094 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.338952 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339026 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.339062 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.441776 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544197 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.544292 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646746 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.646778 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.749364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852721 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852839 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852863 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.852880 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955811 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:53 crc kubenswrapper[4829]: I0217 15:55:53.955910 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:53Z","lastTransitionTime":"2026-02-17T15:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059117 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.059128 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161914 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161925 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161944 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.161957 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.262528 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:53:53.858339509 +0000 UTC Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264188 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264258 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264272 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264313 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.264348 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.278970 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.279037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279088 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.278891 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279251 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:54 crc kubenswrapper[4829]: E0217 15:55:54.279363 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367264 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367314 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367325 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367346 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.367359 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469791 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469866 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.469893 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.571959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.572055 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674653 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674664 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.674692 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.776956 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.777050 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879552 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879564 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.879589 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981406 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:54 crc kubenswrapper[4829]: I0217 15:55:54.981484 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:54Z","lastTransitionTime":"2026-02-17T15:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.083280 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186587 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.186598 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.263230 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:52:22.726916739 +0000 UTC Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.278839 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:55 crc kubenswrapper[4829]: E0217 15:55:55.278997 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288962 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.288996 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.391986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.392045 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494599 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494632 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494643 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.494650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596922 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.596943 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.699259 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801781 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.801866 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904386 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:55 crc kubenswrapper[4829]: I0217 15:55:55.904429 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:55Z","lastTransitionTime":"2026-02-17T15:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006927 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006983 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.006999 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.007024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.007061 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109511 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.109545 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.176478 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.176710 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.176804 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.176774252 +0000 UTC m=+100.593792270 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212369 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212439 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.212479 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.264261 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:27:39.973387979 +0000 UTC Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278801 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278875 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.278801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.278917 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.279013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:56 crc kubenswrapper[4829]: E0217 15:55:56.279228 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315086 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315151 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.315168 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418362 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418373 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418390 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.418401 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520432 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520478 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.520517 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623349 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623363 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.623372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725731 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725792 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725832 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.725849 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828407 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828449 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.828464 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931892 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931909 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:56 crc kubenswrapper[4829]: I0217 15:55:56.931922 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:56Z","lastTransitionTime":"2026-02-17T15:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035692 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035718 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035752 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.035771 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138502 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138677 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.138757 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241474 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.241502 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.264952 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:20:49.14061922 +0000 UTC Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.279271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:57 crc kubenswrapper[4829]: E0217 15:55:57.279400 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344861 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344915 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.344954 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447241 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447298 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.447360 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550017 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550068 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.550112 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653262 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653275 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.653305 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751419 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751493 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" exitCode=1 Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.751543 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.752210 4829 scope.go:117] "RemoveContainer" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.756963 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.756989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757016 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.757026 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.766360 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.780078 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.795823 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.807615 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.819597 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.833139 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.843934 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.855430 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860006 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860081 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.860095 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.866343 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.877892 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.888805 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc 
kubenswrapper[4829]: I0217 15:55:57.900592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.912691 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.927592 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.941215 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.962933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963194 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963456 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:57Z","lastTransitionTime":"2026-02-17T15:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.963775 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:57 crc kubenswrapper[4829]: I0217 15:55:57.982564 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069519 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.069687 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173341 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173568 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.173994 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.174187 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.265940 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:14:21.885526191 +0000 UTC Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.277332 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.278745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.278872 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.279107 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.279201 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.279449 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:55:58 crc kubenswrapper[4829]: E0217 15:55:58.279551 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.296088 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.306459 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.316831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.328813 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.341299 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.351728 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.367674 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.377467 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.380746 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.389848 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc 
kubenswrapper[4829]: I0217 15:55:58.402502 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.439007 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.450285 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.459859 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.477911 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483604 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.483651 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.507323 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.520459 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.537425 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586004 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586109 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.586118 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688411 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688492 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.688503 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.756665 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.756728 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.771277 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.783782 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791423 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791465 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 
15:55:58.791507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.791519 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.795834 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.810965 4829 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.833569 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.851461 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.864622 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.878012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.891845 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894358 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.894416 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.906912 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.917645 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.930049 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.948831 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.963056 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.977682 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.990521 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:58Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996609 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:58 crc kubenswrapper[4829]: I0217 15:55:58.996659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:58Z","lastTransitionTime":"2026-02-17T15:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.001799 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc 
kubenswrapper[4829]: I0217 15:55:59.099471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099541 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.099554 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202473 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202522 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202537 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.202546 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.266668 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:47:34.294881292 +0000 UTC Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.279092 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.279281 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306508 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306538 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306557 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.306566 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408948 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408973 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.408990 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511808 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511841 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.511877 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586400 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586446 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586459 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.586467 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.606379 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611730 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611784 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.611814 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.626359 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630149 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630242 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.630262 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.650599 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.657879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.658174 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.681835 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686121 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686232 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.686249 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.702458 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:59Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:59 crc kubenswrapper[4829]: E0217 15:55:59.702875 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705085 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705111 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.705123 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808372 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.808477 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911169 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911235 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:59 crc kubenswrapper[4829]: I0217 15:55:59.911307 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:59Z","lastTransitionTime":"2026-02-17T15:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014254 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.014298 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117587 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117604 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.117615 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219727 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219738 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219754 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.219765 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.267228 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:59:39.171855671 +0000 UTC Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278691 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278711 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.278848 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.278738 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.278949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:00 crc kubenswrapper[4829]: E0217 15:56:00.279127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323462 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.323555 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426477 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426525 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426551 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.426569 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.529937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530019 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530075 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.530094 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.632996 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633098 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.633114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736328 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736340 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.736363 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839124 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839174 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839190 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839216 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.839232 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941307 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941421 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941460 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:00 crc kubenswrapper[4829]: I0217 15:56:00.941482 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:00Z","lastTransitionTime":"2026-02-17T15:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044491 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.044515 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146533 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146597 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146607 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.146634 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.249991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250054 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.250114 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.267647 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:35:56.95879826 +0000 UTC Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.279054 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:01 crc kubenswrapper[4829]: E0217 15:56:01.279268 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353372 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353388 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.353400 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456084 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456118 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456128 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.456153 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558206 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.558284 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661049 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661108 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661127 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.661143 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763813 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763823 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.763851 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867004 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867047 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867059 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.867088 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969156 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:01 crc kubenswrapper[4829]: I0217 15:56:01.969168 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:01Z","lastTransitionTime":"2026-02-17T15:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072053 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072104 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072115 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.072144 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175138 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175159 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.175177 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.268491 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:58:45.95677128 +0000 UTC Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277989 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.277997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278010 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278194 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278335 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.278423 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:02 crc kubenswrapper[4829]: E0217 15:56:02.278640 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384157 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384176 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384202 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.384225 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487812 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487830 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487853 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.487878 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597641 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597667 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597702 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.597725 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700567 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.700735 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804003 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804057 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804100 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.804117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906804 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906836 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:02 crc kubenswrapper[4829]: I0217 15:56:02.906859 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:02Z","lastTransitionTime":"2026-02-17T15:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009348 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009385 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009407 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.009416 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112512 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.112556 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.215985 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.216003 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.269191 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:58:47.101659077 +0000 UTC Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.278795 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:03 crc kubenswrapper[4829]: E0217 15:56:03.278991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318873 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318883 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318899 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.318908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421296 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421334 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421345 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.421371 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523558 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523633 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523662 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.523677 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625893 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625905 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.625936 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729463 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729480 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729509 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.729531 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.833422 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936821 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936846 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:03 crc kubenswrapper[4829]: I0217 15:56:03.936899 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:03Z","lastTransitionTime":"2026-02-17T15:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040553 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040663 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040686 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.040703 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143827 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.143848 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.246945 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.246996 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247037 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.247053 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.269771 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:42:29.346090317 +0000 UTC Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278308 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278369 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.278517 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.278649 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.278822 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:04 crc kubenswrapper[4829]: E0217 15:56:04.279039 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350397 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350453 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.350480 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453629 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453646 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.453658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557596 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557615 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557639 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.557656 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660882 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.660916 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764707 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764743 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764751 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.764775 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868198 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868270 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868286 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868309 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.868327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971566 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971743 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:04 crc kubenswrapper[4829]: I0217 15:56:04.971798 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:04Z","lastTransitionTime":"2026-02-17T15:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075276 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075304 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.075327 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178621 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178739 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178755 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.178790 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.270043 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:11:12.855564434 +0000 UTC Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.278514 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:05 crc kubenswrapper[4829]: E0217 15:56:05.278640 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281195 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281226 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281245 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.281254 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.384959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385044 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385066 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.385129 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488285 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.488414 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.590955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591035 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.591101 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693638 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693680 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.693700 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796287 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.796445 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899797 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899881 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:05 crc kubenswrapper[4829]: I0217 15:56:05.899924 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:05Z","lastTransitionTime":"2026-02-17T15:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003196 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003269 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003322 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.003344 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106834 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106857 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106887 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.106908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.209972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210071 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210087 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.210129 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.270472 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:43:16.270632701 +0000 UTC Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.279076 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.279949 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.280110 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.280255 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.280690 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.281286 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:06 crc kubenswrapper[4829]: E0217 15:56:06.281453 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.312937 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313028 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313093 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.313121 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416308 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.416412 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.519998 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520043 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520056 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520072 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.520083 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623060 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623077 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623102 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.623120 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726694 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726809 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.726826 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.785952 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.789103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.789736 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.806539 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.822331 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829849 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829886 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc 
kubenswrapper[4829]: I0217 15:56:06.829919 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.829934 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.844797 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.860506 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.881072 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.899086 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.917782 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.931972 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932041 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.932052 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:06Z","lastTransitionTime":"2026-02-17T15:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.934377 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc 
kubenswrapper[4829]: I0217 15:56:06.954136 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.979679 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:06 crc kubenswrapper[4829]: I0217 15:56:06.996235 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.011416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.030663 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036203 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036266 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036284 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036312 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.036336 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.056705 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.072325 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.090615 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.107195 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139391 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139466 4829 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139486 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.139498 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.242891 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243180 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.243237 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.271049 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:07:23.996740063 +0000 UTC Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.278392 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:07 crc kubenswrapper[4829]: E0217 15:56:07.278638 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345717 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345766 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345782 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.345792 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449101 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449126 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.449188 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552520 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552589 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552644 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.552658 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656205 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.656278 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.758998 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759055 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759110 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.759134 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.795498 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.796408 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/2.log" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800118 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" exitCode=1 Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.800239 4829 scope.go:117] "RemoveContainer" containerID="f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.801184 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:07 crc kubenswrapper[4829]: E0217 15:56:07.801438 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.833772 4829 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b5
12dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.860367 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-
bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc 
kubenswrapper[4829]: I0217 15:56:07.861384 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861441 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.861472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.878773 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.895756 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.912721 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.927734 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.949556 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 
15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964665 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964733 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964759 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964789 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.964812 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:07Z","lastTransitionTime":"2026-02-17T15:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.967788 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:07 crc kubenswrapper[4829]: I0217 15:56:07.990785 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.011543 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.030826 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.046408 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.061012 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.067240 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067321 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.067392 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.077856 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915
fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.098373 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.115563 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.134741 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170675 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170687 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 
15:56:08.170710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.170723 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.272082 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:53:45.67667506 +0000 UTC Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274720 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274762 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274773 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.274799 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279298 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279394 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.279654 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.279811 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.300668 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.318377 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.339162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.359391 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5900f72df5ce5e50cad6e82b7613cb56d1dc4a24fb83eb0d943459c8a015f4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:37Z\\\",\\\"message\\\":\\\"Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:37.247764 6468 services_controller.go:452] Built service 
openshift-network-console/networking-console-plugin per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247777 6468 services_controller.go:453] Built service openshift-network-console/networking-console-plugin template LB for network=default: []services.LB{}\\\\nI0217 15:55:37.247779 6468 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}\\\\nI0217 15:55:37.247787 6468 services_controller.go:360] Finished syncing service community-operators on namespace openshift-marketplace for network=default : 790.392µs\\\\nI0217 15:55:37.247791 6468 services_controller.go:454] Service openshift-network-console/networking-console-plugin for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0217 15:55:37.247594 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-
bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.376335 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378399 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378420 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378454 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.378472 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.396708 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.414416 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.427156 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.444726 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f181786
8df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.461809 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.477793 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480810 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480835 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480867 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.480891 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.493486 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.514737 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.531812 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.548764 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.567847 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584246 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584315 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584366 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.584384 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.587806 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687498 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687556 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687597 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687619 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.687636 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791082 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791617 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791654 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.791673 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.807364 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.813434 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:08 crc kubenswrapper[4829]: E0217 15:56:08.813920 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.834989 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2af2d606-28d2-485f-a755-6a525fdbfcf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:55:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 15:55:01.866175 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:55:01.868416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2796798527/tls.crt::/tmp/serving-cert-2796798527/tls.key\\\\\\\"\\\\nI0217 15:55:07.962182 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:55:07.970442 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:55:07.970482 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:55:07.970522 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:55:07.970534 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:55:07.982678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:55:07.982716 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:55:07.982735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:55:07.982742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:55:07.982748 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:55:07.982754 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:55:07.982989 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:55:07.985611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b265e901400172960c51f0931bdf7ba3
41c214b5c728a997e92ec4614f7d503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.849959 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"577908b4-4366-480b-974e-cee2a3ff74a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://472ce8ac9abd65068e80bc0fbb474b41b8be4bf4c9de075f98de441de218d743\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2477f971db76c411a917c453adb494ab65c9
f1ee22cd56b13c1f478ca55d7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-766kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwdn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.862334 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a58e037-3472-4502-8724-182a196134bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c7f7262e73929f7522060b00614225bf780992d8e56175594a9a93e8555499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41fd513659f94d0f32fee86ca657fbadb963bcf8b90a61fe0376a75d9da2380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89eb0f13411389071a78e66f6c6f530d6d3b33a4ec6996e89904036eb9446eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.884621 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770c7078919536e6fac17673ab2f179d6acceadde6b1e315180de0c438bd6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894442 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894712 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894843 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.894943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.895027 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:08Z","lastTransitionTime":"2026-02-17T15:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.900167 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fbb42864-7e0c-40a9-a14a-5f4155ed0e94\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://247ac364ae0b985ed8617fbcd1571dd20cd3202e4daac066c217e254e34ea1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdfkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fzwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.913886 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"633df93b-8492-4bb1-bc9a-3ccd3185fe63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ba3eaf2bfcf9a4e702ad222b438ba7d67166a7193ee3093e0863afb66361081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://604a8fbf4b2e516b32a2b875ecf915fa72a816094ff52727be41e83e41d44019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e75f1b103a7a987a69d7e5aa7f3d4f6ef214b686a93df98799bacfb4a80dcf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2cd224ec769ffbe08cf027c5b4f26943be41499d1e8daf66ee8b825de20cfc8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.925907 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.936658 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-grnlx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e1b1db2-9b2f-4bdb-acc2-b99e5e87e3bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0a93ca111b77dd70ef95c23e471ab588371ec976df7b6a8958b524579bc63c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccmvh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-grnlx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.953179 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nhlmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88e25bc5-0b59-4edf-a8f6-1a5a026155c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:56Z\\\",\\\"message\\\":\\\"2026-02-17T15:55:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4\\\\n2026-02-17T15:55:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_863546aa-8853-43a0-96b5-bc0af2a795d4 to /host/opt/cni/bin/\\\\n2026-02-17T15:55:11Z [verbose] multus-daemon started\\\\n2026-02-17T15:55:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-545sp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nhlmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.962400 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gbvgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cd8bd1-bb6a-405b-b23d-26c561d126d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26d7e9c0d3e65193f4f3d7b2da290e25ff08c3d03c9705dac296b51432efbafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-77vmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gbvgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc kubenswrapper[4829]: I0217 15:56:08.972226 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xdb29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mtt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xdb29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:08 crc 
kubenswrapper[4829]: I0217 15:56:08.989011 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:08Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006870 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.006990 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.007146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.007272 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.008130 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.021206 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1951359eece6210a07311848fb9ae0d9a286c63f814ff9eb0e14a11d23aeea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d43f0f726950504f371270c043cad400af3b832e7ca423a3af8f3d02810adda6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.037181 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e3d3c1be2427f2db0e405c4fb19bff4583ef5c39aaf93a2efedefbbef0c2fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.052403 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d84d045f-af00-4d13-be03-8b03ad77f980\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c561c0e861815a3f8f4555e99b606b9bd6476768ce3b5aacfa53ffb3de70688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549bdda90d808169e0b9d2472f1a798f6b9a2a50869487c858b481b1d0531f54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af54c04330f81fb06e293020e24bcac26a4e315e943a9359d61e689fb419c1d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eb5e80b41941fc4df3e95ae0c49601c2b8ea3fa5360553011e8321a66c443cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://041d964abb6417b60840c514acbd15225ab9d66211fb62eefa84fa1adb769571\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef59f147469d34f5421bc5da1fe6094bee925f42b946e976b8b4b512dedc781d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1c48712608a43fcd5c522d47a1897b7c193171c60f4a0ff6e65bc8f22dcfd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4fcg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-p9rjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.076162 4829 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fad9f982-deda-446c-8801-dc47104eee62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:56:07Z\\\",\\\"message\\\":\\\"lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:56:07.342043 6861 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for 
network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.342049 6861 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:56:07.341923 6861 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:56:07.341790 6861 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}\\\\nI0217 15:56:07.342110 6861 services_controller.go:360] Finished syncing service metrics on namespace openshift-ingress-operator for network=default : 5.185229ms\\\\nF0217 15:56:07.342115 6861 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:56:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://562255d0aa68de84b9
c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbqk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:55:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjd7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111177 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111222 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111251 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.111261 4829 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214612 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214630 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214657 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.214673 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.272549 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:36:28.910815691 +0000 UTC Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.278980 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.279157 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317471 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317535 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317554 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317605 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.317623 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421023 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421089 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421105 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421132 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.421152 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524306 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524327 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.524372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627620 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627671 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627688 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627711 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.627752 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730326 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730338 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730353 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.730366 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834281 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834337 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834350 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834371 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.834382 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847324 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847367 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847378 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847396 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.847407 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.861193 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864757 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.864832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.875731 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878735 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.878759 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.888621 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892591 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892623 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892634 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.892659 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.909523 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913493 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913524 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913536 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913581 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.913596 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.929557 4829 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:56:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e093bc13-e732-4259-b0a8-2325e80c34f5\\\",\\\"systemUUID\\\":\\\"420e9fca-55f5-42fc-a60a-919d603b95e0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:56:09Z is after 2025-08-24T17:21:41Z" Feb 17 15:56:09 crc kubenswrapper[4829]: E0217 15:56:09.929900 4829 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936678 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936768 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:09 crc kubenswrapper[4829]: I0217 15:56:09.936816 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:09Z","lastTransitionTime":"2026-02-17T15:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.038997 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039031 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.039059 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142170 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142294 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.142318 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245146 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245178 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.245228 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.273645 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:01:14.850278956 +0000 UTC Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279113 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.279295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279442 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.279469 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.279889 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:10 crc kubenswrapper[4829]: E0217 15:56:10.280229 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.294851 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348330 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.348359 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450878 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.450962 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553204 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553253 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553264 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553283 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.553297 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655412 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655475 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.655484 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757776 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757847 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757865 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.757908 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860437 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860550 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.860604 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963239 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963256 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963280 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:10 crc kubenswrapper[4829]: I0217 15:56:10.963296 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:10Z","lastTransitionTime":"2026-02-17T15:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065647 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065666 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065690 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.065708 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168798 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168862 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168879 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168904 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.168921 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271137 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271213 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271236 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.271253 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.274425 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:09:15.617018177 +0000 UTC Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.278839 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:11 crc kubenswrapper[4829]: E0217 15:56:11.279000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373864 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373955 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.373977 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.374003 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.374021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477466 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477526 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477543 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477569 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.477620 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580401 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580469 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580514 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.580534 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683021 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683061 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683069 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.683092 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786096 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786112 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786134 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.786153 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888143 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888161 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888184 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.888201 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991377 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991439 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991488 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:11 crc kubenswrapper[4829]: I0217 15:56:11.991508 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:11Z","lastTransitionTime":"2026-02-17T15:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.057813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058027 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:16.057988318 +0000 UTC m=+148.475006326 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.058165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.058225 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058374 4829 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058396 4829 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058468 4829 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.058454089 +0000 UTC m=+148.475472097 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.058530 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.05847919 +0000 UTC m=+148.475497208 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094693 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094769 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094787 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.094831 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.160128 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.160242 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160431 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160473 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160472 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160485 4829 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc 
kubenswrapper[4829]: E0217 15:56:12.160506 4829 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160524 4829 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160555 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.160536856 +0000 UTC m=+148.577554834 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.160631 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.160602318 +0000 UTC m=+148.577620326 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198271 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198354 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198380 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.198398 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.274568 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:24:28.213750051 +0000 UTC Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279232 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279328 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279462 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.279340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279623 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:12 crc kubenswrapper[4829]: E0217 15:56:12.279774 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301094 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301123 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.301139 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403494 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403606 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403628 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403655 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.403672 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506394 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506436 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506449 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506489 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.506505 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609799 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609901 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609933 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.609955 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713158 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713225 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713249 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.713267 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816852 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816920 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816939 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.816990 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919832 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919851 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:12 crc kubenswrapper[4829]: I0217 15:56:12.919891 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:12Z","lastTransitionTime":"2026-02-17T15:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022795 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.022838 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126825 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126906 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126938 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.126992 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230185 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230268 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230299 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.230320 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.275634 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:24:28.998641188 +0000 UTC Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.279000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:13 crc kubenswrapper[4829]: E0217 15:56:13.279185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334135 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334207 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334265 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.334287 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437705 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437756 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437774 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437796 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.437815 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.540832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643393 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643409 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643433 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.643448 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.746964 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.747082 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.849968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850051 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850076 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.850093 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953679 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953761 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953785 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:13 crc kubenswrapper[4829]: I0217 15:56:13.953836 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:13Z","lastTransitionTime":"2026-02-17T15:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057219 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057293 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057352 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.057379 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160482 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160565 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160613 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.160631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265547 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265637 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265660 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265689 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.265707 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.275857 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:21:45.806735462 +0000 UTC Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279218 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279276 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279369 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.279502 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279731 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:14 crc kubenswrapper[4829]: E0217 15:56:14.279856 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368360 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368418 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368435 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368458 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.368475 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471435 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471492 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471510 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471532 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.471549 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574698 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574764 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574819 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574850 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.574870 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678650 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678709 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678747 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678775 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.678793 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781546 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781611 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781624 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781640 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.781650 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884208 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884347 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.884364 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987422 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987496 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987515 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:14 crc kubenswrapper[4829]: I0217 15:56:14.987562 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:14Z","lastTransitionTime":"2026-02-17T15:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090793 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090874 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090897 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090923 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.090942 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193918 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193934 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.193978 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.276899 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:10:43.830143755 +0000 UTC
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.279214 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:15 crc kubenswrapper[4829]: E0217 15:56:15.279382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296230 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296302 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296319 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296342 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.296361 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399383 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399461 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399483 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399507 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.399527 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501706 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501790 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501814 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501840 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.501859 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.604961 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605092 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605113 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605139 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.605157 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708052 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708119 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708142 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708173 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.708196 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811164 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811218 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811234 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811257 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.811275 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915971 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.915992 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.916009 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:15 crc kubenswrapper[4829]: I0217 15:56:15.916021 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:15Z","lastTransitionTime":"2026-02-17T15:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019024 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019067 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019080 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019100 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.019111 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.121978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122042 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122065 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.122083 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224303 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224364 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224382 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224405 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.224424 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.277239 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:36:46.063137704 +0000 UTC
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278604 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278648 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.278777 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.278847 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29"
Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.278951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:16 crc kubenswrapper[4829]: E0217 15:56:16.279066 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327144 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327212 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327233 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327263 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.327286 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430147 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430237 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430255 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430279 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.430297 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532907 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532946 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.532961 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636444 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636788 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636800 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636818 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.636830 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739719 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739837 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.739860 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841932 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841975 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.841991 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.842012 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.842029 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945073 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945090 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945116 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:16 crc kubenswrapper[4829]: I0217 15:56:16.945136 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:16Z","lastTransitionTime":"2026-02-17T15:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047718 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047779 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047794 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.047832 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150192 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150247 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150260 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150282 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.150295 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.251978 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252011 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252020 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252033 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.252041 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.277611 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:10:09.00921764 +0000 UTC
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.278976 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:17 crc kubenswrapper[4829]: E0217 15:56:17.279304 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.298209 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355517 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355672 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355703 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.355721 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.458941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459018 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459039 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459064 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.459082 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562871 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562924 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562935 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562960 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.562975 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666531 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666555 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666631 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.666660 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.770888 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.770976 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771000 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771027 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.771045 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874333 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874417 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874444 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874481 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.874504 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978106 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978187 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978214 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978251 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:56:17 crc kubenswrapper[4829]: I0217 15:56:17.978273 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:17Z","lastTransitionTime":"2026-02-17T15:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081389 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081448 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081465 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081490 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.081510 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184820 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184890 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184908 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184936 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.184953 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278654 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:39:40.198448928 +0000 UTC Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278803 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.278933 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279063 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.279123 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279349 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:18 crc kubenswrapper[4829]: E0217 15:56:18.279465 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.293986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294056 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294074 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294097 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.294117 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.313115 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.313032474 podStartE2EDuration="38.313032474s" podCreationTimestamp="2026-02-17 15:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.310521392 +0000 UTC m=+90.727539380" watchObservedRunningTime="2026-02-17 15:56:18.313032474 +0000 UTC m=+90.730050492" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.367416 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nhlmt" podStartSLOduration=69.367358644 podStartE2EDuration="1m9.367358644s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.367069517 +0000 UTC m=+90.784087535" watchObservedRunningTime="2026-02-17 15:56:18.367358644 +0000 UTC m=+90.784376652" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.368067 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-grnlx" podStartSLOduration=71.36805664 podStartE2EDuration="1m11.36805664s" podCreationTimestamp="2026-02-17 15:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.345936076 +0000 UTC m=+90.762954084" watchObservedRunningTime="2026-02-17 15:56:18.36805664 +0000 UTC m=+90.785074658" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.383705 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gbvgd" podStartSLOduration=70.383684037 podStartE2EDuration="1m10.383684037s" 
podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.383058531 +0000 UTC m=+90.800076539" watchObservedRunningTime="2026-02-17 15:56:18.383684037 +0000 UTC m=+90.800702045" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396343 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396440 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396506 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396530 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.396612 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499858 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499896 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499911 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499929 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.499946 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.551688 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-p9rjv" podStartSLOduration=69.551665708 podStartE2EDuration="1m9.551665708s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.528270681 +0000 UTC m=+90.945288679" watchObservedRunningTime="2026-02-17 15:56:18.551665708 +0000 UTC m=+90.968683696" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.590191 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.590171907 podStartE2EDuration="1.590171907s" podCreationTimestamp="2026-02-17 15:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.562800613 +0000 UTC m=+90.979818601" watchObservedRunningTime="2026-02-17 15:56:18.590171907 +0000 UTC m=+91.007189895" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601772 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601805 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601817 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601833 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.601845 4829 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.610529 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.610510559 podStartE2EDuration="1m10.610510559s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.608743835 +0000 UTC m=+91.025761823" watchObservedRunningTime="2026-02-17 15:56:18.610510559 +0000 UTC m=+91.027528547" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.611205 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.611200206 podStartE2EDuration="8.611200206s" podCreationTimestamp="2026-02-17 15:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.58943041 +0000 UTC m=+91.006448398" watchObservedRunningTime="2026-02-17 15:56:18.611200206 +0000 UTC m=+91.028218194" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.625169 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwdn5" podStartSLOduration=69.62515375 podStartE2EDuration="1m9.62515375s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
15:56:18.625078598 +0000 UTC m=+91.042096616" watchObservedRunningTime="2026-02-17 15:56:18.62515375 +0000 UTC m=+91.042171738" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.645710 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.645689067 podStartE2EDuration="1m10.645689067s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.643986075 +0000 UTC m=+91.061004093" watchObservedRunningTime="2026-02-17 15:56:18.645689067 +0000 UTC m=+91.062707085" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.677220 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podStartSLOduration=69.677199563 podStartE2EDuration="1m9.677199563s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:18.676511607 +0000 UTC m=+91.093529625" watchObservedRunningTime="2026-02-17 15:56:18.677199563 +0000 UTC m=+91.094217581" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704635 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704681 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704700 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 
15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.704741 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807763 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807876 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807902 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807931 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.807952 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911250 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911336 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911361 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:18 crc kubenswrapper[4829]: I0217 15:56:18.911380 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:18Z","lastTransitionTime":"2026-02-17T15:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014487 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014542 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014559 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014610 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.014631 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117642 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117704 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117722 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117748 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.117772 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220900 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220968 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.220986 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.221013 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.221033 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.278893 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:26:07.337649599 +0000 UTC Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.279158 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:19 crc kubenswrapper[4829]: E0217 15:56:19.279618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.323976 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324040 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324058 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324083 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.324101 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428855 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428943 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.428969 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.429005 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.429029 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532452 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532500 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532528 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.532544 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636289 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636375 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636398 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636429 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.636452 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740261 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740316 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740332 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740355 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.740372 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843323 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843404 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843425 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843457 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.843481 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946649 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946724 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946749 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946780 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:19 crc kubenswrapper[4829]: I0217 15:56:19.946802 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:19Z","lastTransitionTime":"2026-02-17T15:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049560 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049661 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049683 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049710 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.049727 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:20Z","lastTransitionTime":"2026-02-17T15:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112860 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112941 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112959 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.112987 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.113010 4829 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:56:20Z","lastTransitionTime":"2026-02-17T15:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.179087 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6"] Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.179680 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.182672 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.182953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.183499 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.184507 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260837 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260944 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.260979 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.261012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279145 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:55:36.60181095 +0000 UTC Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279203 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279338 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279416 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.279471 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.279642 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.279934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:20 crc kubenswrapper[4829]: E0217 15:56:20.280047 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.289765 4829 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363120 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363273 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363338 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.363519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.365409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.375497 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.399211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-844h6\" (UID: \"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.508598 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" Feb 17 15:56:20 crc kubenswrapper[4829]: W0217 15:56:20.542927 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8cdf0f_945d_4110_9a3c_0c9aa337ae6b.slice/crio-528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9 WatchSource:0}: Error finding container 528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9: Status 404 returned error can't find the container with id 528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9 Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.856506 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" event={"ID":"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b","Type":"ContainerStarted","Data":"87e211cb02d5fa35f00618453223aa1f786622d3e8c1a06d7bea493776bce94d"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.856641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" 
event={"ID":"3e8cdf0f-945d-4110-9a3c-0c9aa337ae6b","Type":"ContainerStarted","Data":"528eb7148a5423638a1bd6b175397eb053ca79a5b6c1a4cc420cb55376d074c9"} Feb 17 15:56:20 crc kubenswrapper[4829]: I0217 15:56:20.878842 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-844h6" podStartSLOduration=72.878767976 podStartE2EDuration="1m12.878767976s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:20.876029238 +0000 UTC m=+93.293047276" watchObservedRunningTime="2026-02-17 15:56:20.878767976 +0000 UTC m=+93.295786004" Feb 17 15:56:21 crc kubenswrapper[4829]: I0217 15:56:21.278908 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:21 crc kubenswrapper[4829]: E0217 15:56:21.279298 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278472 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278475 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:22 crc kubenswrapper[4829]: I0217 15:56:22.278508 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:22 crc kubenswrapper[4829]: E0217 15:56:22.278954 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:23 crc kubenswrapper[4829]: I0217 15:56:23.278902 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:23 crc kubenswrapper[4829]: E0217 15:56:23.279515 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:23 crc kubenswrapper[4829]: I0217 15:56:23.279982 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:23 crc kubenswrapper[4829]: E0217 15:56:23.280273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279081 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279143 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:24 crc kubenswrapper[4829]: I0217 15:56:24.279169 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279252 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:24 crc kubenswrapper[4829]: E0217 15:56:24.279694 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:25 crc kubenswrapper[4829]: I0217 15:56:25.278476 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:25 crc kubenswrapper[4829]: E0217 15:56:25.278926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.278991 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.279203 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.279204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.279334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:26 crc kubenswrapper[4829]: I0217 15:56:26.280156 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:26 crc kubenswrapper[4829]: E0217 15:56:26.280459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:27 crc kubenswrapper[4829]: I0217 15:56:27.279239 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:27 crc kubenswrapper[4829]: E0217 15:56:27.279425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.257356 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.257772 4829 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.257876 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs podName:9c29406b-a65e-4386-8f7c-ac9dc76fb4cb nodeName:}" failed. No retries permitted until 2026-02-17 15:57:32.257844537 +0000 UTC m=+164.674862555 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs") pod "network-metrics-daemon-xdb29" (UID: "9c29406b-a65e-4386-8f7c-ac9dc76fb4cb") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.278523 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.278649 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.278705 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.278834 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:28 crc kubenswrapper[4829]: I0217 15:56:28.279193 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:28 crc kubenswrapper[4829]: E0217 15:56:28.280866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:29 crc kubenswrapper[4829]: I0217 15:56:29.279015 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:29 crc kubenswrapper[4829]: E0217 15:56:29.279178 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.278729 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.278882 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279097 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:30 crc kubenswrapper[4829]: I0217 15:56:30.279136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:30 crc kubenswrapper[4829]: E0217 15:56:30.279444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:31 crc kubenswrapper[4829]: I0217 15:56:31.278831 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:31 crc kubenswrapper[4829]: E0217 15:56:31.279016 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.278859 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279065 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.278888 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:32 crc kubenswrapper[4829]: I0217 15:56:32.279155 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:32 crc kubenswrapper[4829]: E0217 15:56:32.279485 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:33 crc kubenswrapper[4829]: I0217 15:56:33.278805 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:33 crc kubenswrapper[4829]: E0217 15:56:33.279128 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.278743 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.278816 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.278949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.279028 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.279288 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.279423 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:34 crc kubenswrapper[4829]: I0217 15:56:34.280691 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:34 crc kubenswrapper[4829]: E0217 15:56:34.280999 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hjd7r_openshift-ovn-kubernetes(fad9f982-deda-446c-8801-dc47104eee62)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" Feb 17 15:56:35 crc kubenswrapper[4829]: I0217 15:56:35.278633 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:35 crc kubenswrapper[4829]: E0217 15:56:35.278793 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279264 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279345 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279463 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:36 crc kubenswrapper[4829]: I0217 15:56:36.279503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279707 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:36 crc kubenswrapper[4829]: E0217 15:56:36.279843 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:37 crc kubenswrapper[4829]: I0217 15:56:37.278615 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:37 crc kubenswrapper[4829]: E0217 15:56:37.278806 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.278855 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.278921 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:38 crc kubenswrapper[4829]: I0217 15:56:38.279434 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283617 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:38 crc kubenswrapper[4829]: E0217 15:56:38.283987 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:39 crc kubenswrapper[4829]: I0217 15:56:39.278690 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:39 crc kubenswrapper[4829]: E0217 15:56:39.278893 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278455 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278518 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:40 crc kubenswrapper[4829]: I0217 15:56:40.278553 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278820 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:40 crc kubenswrapper[4829]: E0217 15:56:40.278953 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:41 crc kubenswrapper[4829]: I0217 15:56:41.278610 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:41 crc kubenswrapper[4829]: E0217 15:56:41.278783 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.278952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.279715 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.279186 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.280030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:42 crc kubenswrapper[4829]: I0217 15:56:42.279086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:42 crc kubenswrapper[4829]: E0217 15:56:42.280302 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.278933 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:43 crc kubenswrapper[4829]: E0217 15:56:43.279375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951027 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951934 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/0.log" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.951995 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" exitCode=1 Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7"} Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952082 4829 scope.go:117] "RemoveContainer" containerID="644e45c5c3d381ec6982b39ba63fbe2f0b03922e41ad892f3b3b6dc243a2773b" Feb 17 15:56:43 crc kubenswrapper[4829]: I0217 15:56:43.952691 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 15:56:43 crc kubenswrapper[4829]: E0217 15:56:43.952989 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279213 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279231 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.279475 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279972 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:44 crc kubenswrapper[4829]: E0217 15:56:44.279818 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:44 crc kubenswrapper[4829]: I0217 15:56:44.957323 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 15:56:45 crc kubenswrapper[4829]: I0217 15:56:45.278708 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:45 crc kubenswrapper[4829]: E0217 15:56:45.278955 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279087 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279131 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:46 crc kubenswrapper[4829]: I0217 15:56:46.279086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279454 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:46 crc kubenswrapper[4829]: E0217 15:56:46.279569 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:47 crc kubenswrapper[4829]: I0217 15:56:47.279192 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:47 crc kubenswrapper[4829]: E0217 15:56:47.279368 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.279053 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.279286 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.281136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281530 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.281650 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.282933 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.287352 4829 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 15:56:48 crc kubenswrapper[4829]: E0217 15:56:48.403866 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.973928 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.976915 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerStarted","Data":"eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6"} Feb 17 15:56:48 crc kubenswrapper[4829]: I0217 15:56:48.977784 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.018985 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podStartSLOduration=100.018968601 podStartE2EDuration="1m40.018968601s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:49.016512554 +0000 UTC m=+121.433530532" 
watchObservedRunningTime="2026-02-17 15:56:49.018968601 +0000 UTC m=+121.435986579" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.278984 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:49 crc kubenswrapper[4829]: E0217 15:56:49.279155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.297024 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:56:49 crc kubenswrapper[4829]: I0217 15:56:49.297148 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:49 crc kubenswrapper[4829]: E0217 15:56:49.297247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:50 crc kubenswrapper[4829]: I0217 15:56:50.278839 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:50 crc kubenswrapper[4829]: E0217 15:56:50.279305 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:50 crc kubenswrapper[4829]: I0217 15:56:50.279627 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:50 crc kubenswrapper[4829]: E0217 15:56:50.279739 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:51 crc kubenswrapper[4829]: I0217 15:56:51.278276 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:51 crc kubenswrapper[4829]: I0217 15:56:51.278295 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:51 crc kubenswrapper[4829]: E0217 15:56:51.278410 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:51 crc kubenswrapper[4829]: E0217 15:56:51.278829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:52 crc kubenswrapper[4829]: I0217 15:56:52.278701 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:52 crc kubenswrapper[4829]: I0217 15:56:52.278718 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:52 crc kubenswrapper[4829]: E0217 15:56:52.279109 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:52 crc kubenswrapper[4829]: E0217 15:56:52.278965 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:53 crc kubenswrapper[4829]: I0217 15:56:53.279004 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:53 crc kubenswrapper[4829]: I0217 15:56:53.279033 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.279200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.279322 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:53 crc kubenswrapper[4829]: E0217 15:56:53.405917 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:54 crc kubenswrapper[4829]: I0217 15:56:54.278905 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:54 crc kubenswrapper[4829]: E0217 15:56:54.279150 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:54 crc kubenswrapper[4829]: I0217 15:56:54.279191 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:54 crc kubenswrapper[4829]: E0217 15:56:54.279323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:55 crc kubenswrapper[4829]: I0217 15:56:55.278503 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:55 crc kubenswrapper[4829]: E0217 15:56:55.278749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:55 crc kubenswrapper[4829]: I0217 15:56:55.279093 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:55 crc kubenswrapper[4829]: E0217 15:56:55.279218 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:56 crc kubenswrapper[4829]: I0217 15:56:56.279173 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:56 crc kubenswrapper[4829]: I0217 15:56:56.279269 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:56 crc kubenswrapper[4829]: E0217 15:56:56.279326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:56 crc kubenswrapper[4829]: E0217 15:56:56.279478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279157 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279266 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:57 crc kubenswrapper[4829]: E0217 15:56:57.279481 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:56:57 crc kubenswrapper[4829]: I0217 15:56:57.279713 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 15:56:57 crc kubenswrapper[4829]: E0217 15:56:57.279696 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.013407 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.013830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27"} Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.278516 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:56:58 crc kubenswrapper[4829]: I0217 15:56:58.278616 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.280307 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.280533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:56:58 crc kubenswrapper[4829]: E0217 15:56:58.406295 4829 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 15:56:59 crc kubenswrapper[4829]: I0217 15:56:59.278726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:59 crc kubenswrapper[4829]: I0217 15:56:59.278726 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:56:59 crc kubenswrapper[4829]: E0217 15:56:59.278898 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:56:59 crc kubenswrapper[4829]: E0217 15:56:59.279028 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:57:00 crc kubenswrapper[4829]: I0217 15:57:00.279164 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:00 crc kubenswrapper[4829]: E0217 15:57:00.279338 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:57:00 crc kubenswrapper[4829]: I0217 15:57:00.279439 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:00 crc kubenswrapper[4829]: E0217 15:57:00.279698 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:57:01 crc kubenswrapper[4829]: I0217 15:57:01.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:01 crc kubenswrapper[4829]: I0217 15:57:01.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:01 crc kubenswrapper[4829]: E0217 15:57:01.279139 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:57:01 crc kubenswrapper[4829]: E0217 15:57:01.279275 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:57:02 crc kubenswrapper[4829]: I0217 15:57:02.278506 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:02 crc kubenswrapper[4829]: I0217 15:57:02.278604 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:02 crc kubenswrapper[4829]: E0217 15:57:02.278773 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:57:02 crc kubenswrapper[4829]: E0217 15:57:02.279113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:57:03 crc kubenswrapper[4829]: I0217 15:57:03.278921 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:03 crc kubenswrapper[4829]: I0217 15:57:03.278938 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:03 crc kubenswrapper[4829]: E0217 15:57:03.279108 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xdb29" podUID="9c29406b-a65e-4386-8f7c-ac9dc76fb4cb" Feb 17 15:57:03 crc kubenswrapper[4829]: E0217 15:57:03.279238 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.278712 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.278853 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282196 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282234 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282732 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:57:04 crc kubenswrapper[4829]: I0217 15:57:04.282903 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.279064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.279081 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.282329 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:57:05 crc kubenswrapper[4829]: I0217 15:57:05.282349 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.530339 4829 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.580856 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.581815 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.583215 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.583990 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.584121 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.591698 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.594211 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.594614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.595189 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615014 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615168 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.615682 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616145 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616406 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616734 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.616822 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.617209 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.619900 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.619960 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620119 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620261 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620270 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620309 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620357 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620380 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620481 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620549 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.620712 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.621219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.621374 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622380 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622794 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.622807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.623254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.624951 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.625280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.625866 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.626236 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.626729 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.632075 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.632346 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633530 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633753 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633843 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.633949 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634100 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: 
I0217 15:57:10.634170 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634239 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634378 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634624 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634736 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634793 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634744 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634873 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.634919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635302 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635458 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.635790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.637900 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638118 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638433 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638632 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638766 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.638983 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.639640 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645106 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645466 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.645561 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.646262 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.646919 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647149 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647299 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647360 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.647596 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648220 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648428 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.648638 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.649949 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:57:10 crc 
kubenswrapper[4829]: I0217 15:57:10.650526 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651097 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651184 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.651778 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.652254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.661182 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.661611 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672051 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672116 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672048 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672376 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.672926 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.673806 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.678795 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.679357 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.688828 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.688970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.689322 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.689968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.690606 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.691054 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692783 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: 
\"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692872 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692894 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692919 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.692980 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693002 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693053 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693147 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693171 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 
15:57:10.693193 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod 
\"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693335 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693439 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693680 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.693932 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.694092 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.694615 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.696735 4829 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.698299 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.698633 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.700018 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.700406 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.701182 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.701554 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.713331 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.713684 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714043 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714406 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5rwbn"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714641 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.714764 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715006 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715111 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715274 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:57:10 
crc kubenswrapper[4829]: I0217 15:57:10.715447 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715739 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715798 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715870 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.715980 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716090 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716474 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716514 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.716963 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717070 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717201 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717299 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717403 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.717444 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.719447 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720003 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.720084 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724022 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724202 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724439 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724477 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724658 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.724896 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.726026 4829 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.726934 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.727504 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728078 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728760 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.728808 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.729281 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.736246 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.736519 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.742651 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.752752 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753131 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753803 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.753887 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.755794 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.767862 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.768463 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.768760 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.769937 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.773176 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.774795 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.775465 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.775915 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.776374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.776439 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.778261 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.786002 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.789905 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.790093 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.790871 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.791358 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.791687 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.792311 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.792512 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793801 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793820 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: 
\"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793932 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.793974 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794001 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794064 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794086 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794109 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794129 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794160 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5x4hf"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794765 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc 
kubenswrapper[4829]: I0217 15:57:10.794990 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795175 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795198 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.794175 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-policies\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795600 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-image-import-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795607 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795736 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c801e449-c529-4c10-a482-f6f3a8c24bb1-audit-dir\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795772 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-audit-dir\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.796963 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-config\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.796968 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-serving-ca\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: 
I0217 15:57:10.797345 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8bea1514-e813-4a49-80fb-cb8de9827a40-node-pullsecrets\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.795648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8bea1514-e813-4a49-80fb-cb8de9827a40-audit\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.800078 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.800715 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e8a98667-8884-4056-8577-3e7db8762ff9-images\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.808226 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.808451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-serving-cert\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.811608 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-etcd-client\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812432 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-encryption-config\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.812516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-etcd-client\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.814305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8bea1514-e813-4a49-80fb-cb8de9827a40-encryption-config\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.815422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8a98667-8884-4056-8577-3e7db8762ff9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.816318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.817626 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c801e449-c529-4c10-a482-f6f3a8c24bb1-serving-cert\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.818280 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.820014 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.821338 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.822595 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.825310 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.825733 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.826898 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.827667 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.832514 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.832814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.834566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.836922 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.838540 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pcvww"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.839394 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.840609 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.841335 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.846178 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.847084 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.847529 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.849478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.855547 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.859633 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.862326 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.864788 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.866218 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.866252 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.867704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.869100 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.870559 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.872023 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.873393 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.876366 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.877655 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.879319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.880407 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.881848 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"] Feb 17 
15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.883229 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pcvww"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.884612 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.886055 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.886309 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.887077 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.887105 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.888253 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"] Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.919023 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.927070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.946414 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:57:10 crc kubenswrapper[4829]: I0217 15:57:10.966143 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:57:10 crc 
kubenswrapper[4829]: I0217 15:57:10.986438 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.006449 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.026146 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.046402 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.067164 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.086993 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.106470 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.126853 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.146378 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.166408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 
17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.186990 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.206278 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.227134 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.247120 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.275845 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.286543 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.306689 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.326669 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.346614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.366651 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" 
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.387391 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.406477 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.427032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.447545 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.466836 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.486969 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.506362 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.527102 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.546089 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.566824 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 
15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.586256 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.606946 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.626941 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.645444 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.666189 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705170 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705340 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705381 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705426 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.705562 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705650 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705690 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705810 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.705939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706528 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706606 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.706846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707017 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707120 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707154 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707183 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707215 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707324 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707360 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: 
\"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.707521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.707727 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.207709478 +0000 UTC m=+144.624727576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708041 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708186 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708223 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708255 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708493 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708547 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708608 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496nb\" 
(UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708686 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708719 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708784 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: 
\"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708820 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708864 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708955 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.708998 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709114 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709410 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709688 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.709803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709900 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.709982 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710052 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710082 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96hm\" (UniqueName: \"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710112 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710140 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " 
pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710187 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710287 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710331 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710368 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710448 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710514 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: 
\"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710557 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710796 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: 
I0217 15:57:11.710891 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.710951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74hl\" (UniqueName: 
\"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711287 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711414 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 
15:57:11.711476 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711520 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711563 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711714 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711813 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711843 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711888 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " 
pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711918 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.711976 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.712019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.726841 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" 
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.743455 4829 request.go:700] Waited for 1.014777653s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.745717 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.766306 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.786400 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.805898 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813071 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.813241 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.313199769 +0000 UTC m=+144.730217787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813351 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813436 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813470 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96hm\" (UniqueName: 
\"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813507 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813550 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813689 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813725 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813774 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813923 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.813971 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814061 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjkd\" (UniqueName: 
\"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814489 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814560 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814715 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814790 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: 
\"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814826 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814857 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814929 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814960 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.814990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815023 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815046 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44a4515e-e65a-4069-bcfe-d84494a724cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815080 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815114 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815155 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815245 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815286 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815321 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815356 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815358 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: 
\"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b74hl\" (UniqueName: \"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815475 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815518 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815562 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815644 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815804 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815878 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.815990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816022 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816101 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816136 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816237 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816265 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-trusted-ca\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816426 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816468 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816571 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.816637 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816725 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816809 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: 
\"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816876 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816911 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816976 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.816973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.817008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.817061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410fb51-b781-4989-ba46-c7c6b189188b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.818780 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819698 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819890 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819943 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.819989 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820092 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820142 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.820347 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820397 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-auth-proxy-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.820498 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: 
\"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e15283-b4a3-40c9-8117-77d662f30438-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821296 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.821489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67525a8a-c8e8-469c-a60d-1676ac5b057e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.822091 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.822165 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-config\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823689 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f36b68-dd7a-41a7-86ff-ebcf90897710-config\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823713 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.823953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824062 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824524 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: 
\"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824615 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824742 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwh8\" 
(UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-metrics-tls\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824854 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824904 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.824974 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825030 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825082 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825179 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825260 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825341 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825379 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496nb\" (UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825416 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.825918 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.826197 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-config\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.826434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca2091-de8d-469c-832b-057ee57bb8ee-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.827896 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.827902 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ed6518-2fbf-4aa0-b136-d605a9cb972a-config\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.828750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.829696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.830259 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831743 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831905 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.831976 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832036 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832249 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832302 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832352 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832401 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.832450 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.833490 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-client\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.834293 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.332710838 +0000 UTC m=+144.749728846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834632 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.834981 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835074 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.835788 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836226 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b184f73-7f44-4ddb-b344-a5a635501c7d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.836305 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.837421 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca2091-de8d-469c-832b-057ee57bb8ee-config\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838115 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a11950-91e2-4d36-9d60-341b9a6b21b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838148 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.838664 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.839258 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6a1e674-b813-4a95-b14e-a2774f390155-etcd-service-ca\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.840254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-serving-cert\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.841818 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: \"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.842848 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ed6518-2fbf-4aa0-b136-d605a9cb972a-serving-cert\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.842940 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-metrics-certs\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.843481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.843953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-stats-auth\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.844315 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.844913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0af9147-4f17-470b-a49e-5a75ff9b5005-trusted-ca\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a717f8-3264-4540-b132-ab42accb57f0-service-ca-bundle\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.845868 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6410fb51-b781-4989-ba46-c7c6b189188b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846117 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846389 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a11950-91e2-4d36-9d60-341b9a6b21b2-config\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e417c4d-c6be-42e9-a72a-9021805d4f7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/546891ca-dff6-4af9-a495-8bdd561e4233-serving-cert\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846884 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67525a8a-c8e8-469c-a60d-1676ac5b057e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.846954 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-service-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847367 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847425 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/546891ca-dff6-4af9-a495-8bdd561e4233-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847658 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.847923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e15283-b4a3-40c9-8117-77d662f30438-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.848093 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.848804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.849016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5a717f8-3264-4540-b132-ab42accb57f0-default-certificate\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.849123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0af9147-4f17-470b-a49e-5a75ff9b5005-metrics-tls\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.850163 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e3f36b68-dd7a-41a7-86ff-ebcf90897710-machine-approver-tls\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.851085 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6a1e674-b813-4a95-b14e-a2774f390155-serving-cert\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.851171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44a4515e-e65a-4069-bcfe-d84494a724cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.852386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.852791 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.854262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b184f73-7f44-4ddb-b344-a5a635501c7d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.865902 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.885908 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.905880 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.925116 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.939429 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.939756 4829 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.439673378 +0000 UTC m=+144.856691386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.939901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940102 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940242 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940277 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940329 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940426 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940464 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940567 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: E0217 15:57:11.940654 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.440627704 +0000 UTC m=+144.857645712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940870 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.940977 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941037 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941130 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.941162 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjkd\" (UniqueName: \"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.944364 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.944442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc 
kubenswrapper[4829]: I0217 15:57:11.945431 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945523 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-mountpoint-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" 
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.945982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946145 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod 
\"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946346 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946506 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946842 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946882 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.946975 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947047 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947124 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod 
\"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947344 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947477 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947698 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947764 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947874 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-registration-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.947958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-srv-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.948684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-plugins-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.948891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c0ad3e99-7312-4c48-bbfc-5355df896d20-tmpfs\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.949510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-csi-data-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.949688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/316979dc-a708-402a-94b0-d4d6bad3c7ca-socket-dir\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8264089d-eadc-4f77-9884-c162be2861fa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fffa6856-9b00-44e9-81c6-643defb47c04-images\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.950508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2f48424-451a-4a3a-a539-eb6ad78c8944-profile-collector-cert\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957487 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26589ee7-3777-43d9-b378-df92780df986-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fffa6856-9b00-44e9-81c6-643defb47c04-proxy-tls\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.957899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.961196 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-key\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.969202 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.985939 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 15:57:11 crc kubenswrapper[4829]: I0217 15:57:11.991041 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9061d74f-5644-4fa3-8484-4bcf2508dbfa-signing-cabundle\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.005370 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.025173 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.045247 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.049462 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.049847 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.549818024 +0000 UTC m=+144.966836042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.050378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.051039 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.551006847 +0000 UTC m=+144.968024875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.057659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8264089d-eadc-4f77-9884-c162be2861fa-proxy-tls\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.065372 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.085413 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.106394 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.126312 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.139489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34421a4c-a917-467e-938b-fe7e00cc76c4-srv-cert\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.146522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.152048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.152249 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.65222054 +0000 UTC m=+145.069238558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.153037 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.153652 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.653628959 +0000 UTC m=+145.070646967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.156641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84cacb3d-ec7c-4a92-a265-237ea9218b5e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.165731 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.185212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.204953 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.210219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf1e080-f5b6-4360-a74f-5524ece2120c-config\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.225810 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.236206 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf1e080-f5b6-4360-a74f-5524ece2120c-serving-cert\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.246542 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.254751 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.254998 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.754966846 +0000 UTC m=+145.171984864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.255561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.256139 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.756110237 +0000 UTC m=+145.173128265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.266600 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.270848 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.286089 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.306156 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.326193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.347282 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.353649 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.356559 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.356794 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.856771666 +0000 UTC m=+145.273789684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.357199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.357668 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.85764495 +0000 UTC m=+145.274662968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.376247 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.382189 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.385256 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.405916 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.414760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.426163 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.445746 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.452463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-webhook-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.452763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0ad3e99-7312-4c48-bbfc-5355df896d20-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.461623 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.461745 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.961714451 +0000 UTC m=+145.378732469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.462521 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.463020 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:12.963000956 +0000 UTC m=+145.380018954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.485408 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dzw\" (UniqueName: \"kubernetes.io/projected/8bea1514-e813-4a49-80fb-cb8de9827a40-kube-api-access-j5dzw\") pod \"apiserver-76f77b778f-pdm8f\" (UID: \"8bea1514-e813-4a49-80fb-cb8de9827a40\") " pod="openshift-apiserver/apiserver-76f77b778f-pdm8f"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.501403 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m222s\" (UniqueName: \"kubernetes.io/projected/c801e449-c529-4c10-a482-f6f3a8c24bb1-kube-api-access-m222s\") pod \"apiserver-7bbb656c7d-lbqc5\" (UID: \"c801e449-c529-4c10-a482-f6f3a8c24bb1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.505999 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.515323 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-node-bootstrap-token\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.525290 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.535468 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c67dea52-b0b7-4b48-80e1-54d9754487ed-certs\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.564306 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.564424 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.064396385 +0000 UTC m=+145.481414403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.564894 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.565202 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.065194007 +0000 UTC m=+145.482211985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.566317 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.571320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49q6l\" (UniqueName: \"kubernetes.io/projected/e8a98667-8884-4056-8577-3e7db8762ff9-kube-api-access-49q6l\") pod \"machine-api-operator-5694c8668f-47kpc\" (UID: \"e8a98667-8884-4056-8577-3e7db8762ff9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.585258 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.590342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b341af34-7b4a-4137-adc0-eb743588d455-config-volume\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.605935 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.626625 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.636885 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b341af34-7b4a-4137-adc0-eb743588d455-metrics-tls\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.645458 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.665062 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.666731 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.667136 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.16710048 +0000 UTC m=+145.584118508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.668088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.668644 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.168617831 +0000 UTC m=+145.585635849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.670186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b45ddda-3269-494c-b1d6-c1219a8f61db-cert\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.685919 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.705099 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.725829 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.735006 4829 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.744134 4829 request.go:700] Waited for 1.856815812s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.746814 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.761016 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.765810 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.769126 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.769371 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.269326121 +0000 UTC m=+145.686344159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.770165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.770739 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.270712579 +0000 UTC m=+145.687730667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.792995 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.836530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96hm\" (UniqueName: \"kubernetes.io/projected/a5a717f8-3264-4540-b132-ab42accb57f0-kube-api-access-q96hm\") pod \"router-default-5444994796-5rwbn\" (UID: \"a5a717f8-3264-4540-b132-ab42accb57f0\") " pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.861778 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44vt\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-kube-api-access-z44vt\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.872225 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.872464 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.372434486 +0000 UTC m=+145.789452474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.872711 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.873272 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.373250549 +0000 UTC m=+145.790268537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.879313 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.892992 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lgr\" (UniqueName: \"kubernetes.io/projected/f73ce613-5317-4f8e-82c9-4af380ed614c-kube-api-access-w6lgr\") pod \"downloads-7954f5f757-2sdwc\" (UID: \"f73ce613-5317-4f8e-82c9-4af380ed614c\") " pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.915044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b74hl\" (UniqueName: \"kubernetes.io/projected/90ed6518-2fbf-4aa0-b136-d605a9cb972a-kube-api-access-b74hl\") pod \"console-operator-58897d9998-fq9th\" (UID: \"90ed6518-2fbf-4aa0-b136-d605a9cb972a\") " pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.936298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod 
\"console-f9d7485db-9fgb2\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.951913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67525a8a-c8e8-469c-a60d-1676ac5b057e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8v8bb\" (UID: \"67525a8a-c8e8-469c-a60d-1676ac5b057e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.964996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.974406 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:12 crc kubenswrapper[4829]: E0217 15:57:12.974905 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.474890934 +0000 UTC m=+145.891908912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.980182 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.996797 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:12 crc kubenswrapper[4829]: I0217 15:57:12.997074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.014809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0af9147-4f17-470b-a49e-5a75ff9b5005-bound-sa-token\") pod \"ingress-operator-5b745b69d9-clr5s\" (UID: \"d0af9147-4f17-470b-a49e-5a75ff9b5005\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.036227 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a717f8_3264_4540_b132_ab42accb57f0.slice/crio-5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af WatchSource:0}: Error finding container 5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af: Status 404 returned error can't find the container with id 5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.036331 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.040273 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptbp\" (UniqueName: \"kubernetes.io/projected/e3f36b68-dd7a-41a7-86ff-ebcf90897710-kube-api-access-tptbp\") pod \"machine-approver-56656f9798-kb5nv\" (UID: \"e3f36b68-dd7a-41a7-86ff-ebcf90897710\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.058426 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496nb\" (UniqueName: \"kubernetes.io/projected/6410fb51-b781-4989-ba46-c7c6b189188b-kube-api-access-496nb\") pod \"openshift-apiserver-operator-796bbdcf4f-nnktd\" (UID: \"6410fb51-b781-4989-ba46-c7c6b189188b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.067923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd2z\" (UniqueName: \"kubernetes.io/projected/2b184f73-7f44-4ddb-b344-a5a635501c7d-kube-api-access-ntd2z\") pod \"cluster-image-registry-operator-dc59b4c8b-swcxx\" (UID: \"2b184f73-7f44-4ddb-b344-a5a635501c7d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.076043 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.076464 4829 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.576445487 +0000 UTC m=+145.993463465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.079231 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pdm8f"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.084962 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6c9l\" (UniqueName: \"kubernetes.io/projected/546891ca-dff6-4af9-a495-8bdd561e4233-kube-api-access-h6c9l\") pod \"authentication-operator-69f744f599-5m4j8\" (UID: \"546891ca-dff6-4af9-a495-8bdd561e4233\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.089019 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rwbn" event={"ID":"a5a717f8-3264-4540-b132-ab42accb57f0","Type":"ContainerStarted","Data":"5ede7cb411b95dbffe6dd92b42c4e86720784e8aabf8040beee6bfc2671a42af"} Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.103268 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmb6n\" (UniqueName: \"kubernetes.io/projected/c5ad87cd-b97f-483a-825a-46c77bd5d5e0-kube-api-access-jmb6n\") pod \"openshift-config-operator-7777fb866f-fbwnl\" (UID: 
\"c5ad87cd-b97f-483a-825a-46c77bd5d5e0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.115210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.119683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdltg\" (UniqueName: \"kubernetes.io/projected/4e417c4d-c6be-42e9-a72a-9021805d4f7c-kube-api-access-xdltg\") pod \"cluster-samples-operator-665b6dd947-cgntr\" (UID: \"4e417c4d-c6be-42e9-a72a-9021805d4f7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.141015 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.145967 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"route-controller-manager-6576b87f9c-9v7jj\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.163187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a11950-91e2-4d36-9d60-341b9a6b21b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6pkfx\" (UID: \"87a11950-91e2-4d36-9d60-341b9a6b21b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.177454 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.178303 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.179180 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.679134651 +0000 UTC m=+146.096152629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.184326 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ca2091-de8d-469c-832b-057ee57bb8ee-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6f6lw\" (UID: \"76ca2091-de8d-469c-832b-057ee57bb8ee\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.186561 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.199700 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.201159 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"oauth-openshift-558db77b4-8kmp8\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.222958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6szn7\" (UniqueName: \"kubernetes.io/projected/32e15283-b4a3-40c9-8117-77d662f30438-kube-api-access-6szn7\") pod \"openshift-controller-manager-operator-756b6f6bc6-z29z2\" (UID: \"32e15283-b4a3-40c9-8117-77d662f30438\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.239239 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.247263 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.249452 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jflb5\" (UniqueName: \"kubernetes.io/projected/5c008a05-c20f-4b78-b8f3-0ebb1ccf6569-kube-api-access-jflb5\") pod \"dns-operator-744455d44c-2zdl6\" (UID: \"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569\") " pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.254000 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.267499 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.267516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6bv8\" (UniqueName: \"kubernetes.io/projected/44a4515e-e65a-4069-bcfe-d84494a724cd-kube-api-access-l6bv8\") pod \"kube-storage-version-migrator-operator-b67b599dd-2l44d\" (UID: \"44a4515e-e65a-4069-bcfe-d84494a724cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.271265 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.281518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.281879 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.781864577 +0000 UTC m=+146.198882555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.284502 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2n8l\" (UniqueName: \"kubernetes.io/projected/d6a1e674-b813-4a95-b14e-a2774f390155-kube-api-access-b2n8l\") pod \"etcd-operator-b45778765-xjtlq\" (UID: \"d6a1e674-b813-4a95-b14e-a2774f390155\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.289865 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302268 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-47kpc"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"controller-manager-879f6c89f-xn8fx\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.302744 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.303678 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.316483 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.317153 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67525a8a_c8e8_469c_a60d_1676ac5b057e.slice/crio-d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61 WatchSource:0}: Error finding container d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61: Status 404 returned error can't find the container with id d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61 Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.321701 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.333425 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.333609 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhs7h\" (UniqueName: \"kubernetes.io/projected/316979dc-a708-402a-94b0-d4d6bad3c7ca-kube-api-access-rhs7h\") pod \"csi-hostpathplugin-rrc2k\" (UID: \"316979dc-a708-402a-94b0-d4d6bad3c7ca\") " pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.334604 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f36b68_dd7a_41a7_86ff_ebcf90897710.slice/crio-aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57 WatchSource:0}: Error finding container aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57: Status 404 returned error can't find the container with id 
aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57 Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.340769 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.343442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"collect-profiles-29522385-m5vfb\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.349789 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.358602 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.360099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4vr\" (UniqueName: \"kubernetes.io/projected/fffa6856-9b00-44e9-81c6-643defb47c04-kube-api-access-rz4vr\") pod \"machine-config-operator-74547568cd-m79xc\" (UID: \"fffa6856-9b00-44e9-81c6-643defb47c04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.365941 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.387991 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.388681 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.888665152 +0000 UTC m=+146.305683130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.390894 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkpf7\" (UniqueName: \"kubernetes.io/projected/c67dea52-b0b7-4b48-80e1-54d9754487ed-kube-api-access-mkpf7\") pod \"machine-config-server-5x4hf\" (UID: \"c67dea52-b0b7-4b48-80e1-54d9754487ed\") " pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.391118 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.398291 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.421977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjkd\" (UniqueName: \"kubernetes.io/projected/c0ad3e99-7312-4c48-bbfc-5355df896d20-kube-api-access-4bjkd\") pod \"packageserver-d55dfcdfc-hpnl2\" (UID: \"c0ad3e99-7312-4c48-bbfc-5355df896d20\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.429298 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.436032 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8zs\" (UniqueName: \"kubernetes.io/projected/b341af34-7b4a-4137-adc0-eb743588d455-kube-api-access-8t8zs\") pod \"dns-default-pcvww\" (UID: \"b341af34-7b4a-4137-adc0-eb743588d455\") " pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.438125 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fq9th"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.446061 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.448136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zp7\" (UniqueName: \"kubernetes.io/projected/1bf1e080-f5b6-4360-a74f-5524ece2120c-kube-api-access-s4zp7\") pod \"service-ca-operator-777779d784-mkbhc\" (UID: \"1bf1e080-f5b6-4360-a74f-5524ece2120c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.464299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2wwc\" (UniqueName: \"kubernetes.io/projected/84cacb3d-ec7c-4a92-a265-237ea9218b5e-kube-api-access-s2wwc\") pod \"package-server-manager-789f6589d5-cgktd\" (UID: \"84cacb3d-ec7c-4a92-a265-237ea9218b5e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.471385 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.477929 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rtj\" (UniqueName: \"kubernetes.io/projected/2bfb2da7-1a85-42f9-8c3f-c7997e85dd58-kube-api-access-d7rtj\") pod \"control-plane-machine-set-operator-78cbb6b69f-sqmls\" (UID: \"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.480913 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5x4hf" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.489510 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.489937 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:13.989896086 +0000 UTC m=+146.406914064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.493496 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.503131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976wz\" (UniqueName: \"kubernetes.io/projected/34421a4c-a917-467e-938b-fe7e00cc76c4-kube-api-access-976wz\") pod \"olm-operator-6b444d44fb-wj6cl\" (UID: \"34421a4c-a917-467e-938b-fe7e00cc76c4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.514932 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.523514 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8hh\" (UniqueName: \"kubernetes.io/projected/9061d74f-5644-4fa3-8484-4bcf2508dbfa-kube-api-access-sv8hh\") pod \"service-ca-9c57cc56f-8wp4k\" (UID: \"9061d74f-5644-4fa3-8484-4bcf2508dbfa\") " pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.534644 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.538617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpnc\" (UniqueName: \"kubernetes.io/projected/9b45ddda-3269-494c-b1d6-c1219a8f61db-kube-api-access-zbpnc\") pod \"ingress-canary-dmlvg\" (UID: \"9b45ddda-3269-494c-b1d6-c1219a8f61db\") " pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.561922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnmh\" (UniqueName: \"kubernetes.io/projected/d2f48424-451a-4a3a-a539-eb6ad78c8944-kube-api-access-vpnmh\") pod \"catalog-operator-68c6474976-6c88x\" (UID: \"d2f48424-451a-4a3a-a539-eb6ad78c8944\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.581614 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztz9w\" (UniqueName: \"kubernetes.io/projected/708b9214-1619-4dff-a626-027ee223f939-kube-api-access-ztz9w\") pod \"migrator-59844c95c7-krtjv\" (UID: \"708b9214-1619-4dff-a626-027ee223f939\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:13 crc kubenswrapper[4829]: W0217 15:57:13.582908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b184f73_7f44_4ddb_b344_a5a635501c7d.slice/crio-5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f WatchSource:0}: Error finding container 5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f: Status 404 returned error can't find the container with id 5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.590423 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.590889 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.090868564 +0000 UTC m=+146.507886542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.603762 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frpl9\" (UniqueName: \"kubernetes.io/projected/8264089d-eadc-4f77-9884-c162be2861fa-kube-api-access-frpl9\") pod \"machine-config-controller-84d6567774-m5kf7\" (UID: \"8264089d-eadc-4f77-9884-c162be2861fa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.622183 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcn4t\" (UniqueName: \"kubernetes.io/projected/26589ee7-3777-43d9-b378-df92780df986-kube-api-access-mcn4t\") pod \"multus-admission-controller-857f4d67dd-pt2fg\" (UID: \"26589ee7-3777-43d9-b378-df92780df986\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:13 
crc kubenswrapper[4829]: I0217 15:57:13.638272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"marketplace-operator-79b997595-zn4qs\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.653212 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.676373 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.682827 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.683225 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.696413 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.697834 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:14.197817234 +0000 UTC m=+146.614835212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.699539 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.711773 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.715966 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728041 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2sdwc"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728142 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.728529 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.729150 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2zdl6"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.736108 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.752558 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.762428 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.797639 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.798003 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.297987269 +0000 UTC m=+146.715005237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.804801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dmlvg" Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.899481 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:13 crc kubenswrapper[4829]: E0217 15:57:13.899806 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.39979325 +0000 UTC m=+146.816811228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.915464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5m4j8"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.961272 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d"] Feb 17 15:57:13 crc kubenswrapper[4829]: I0217 15:57:13.981242 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.010929 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.011299 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.511282673 +0000 UTC m=+146.928300651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.025540 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.055466 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a4515e_e65a_4069_bcfe_d84494a724cd.slice/crio-978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349 WatchSource:0}: Error finding container 978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349: Status 404 returned error can't find the container with id 978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349 Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.093870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"566daf6ef97a21afbb106de727058b5ec5000fee0dfa3b6a1036b5c171adcbe9"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.094992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" event={"ID":"67525a8a-c8e8-469c-a60d-1676ac5b057e","Type":"ContainerStarted","Data":"d9145bfee2db2d875b307b678d4cf6ed66b1db420bd6c93c371a10017252aa61"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.096646 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"603fbe2bbf17c826cfad591ff76754f7ecaa69aaf747d706366365ecc1add41d"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.096667 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"316dd5f02c346c16ef62cf763a938e846701064a20818af5bda732cce8e72df1"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.101262 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.102876 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"aaed2ef7c35bbeb7a0373949d58eb8ef3fdd84fd0534b380317111ffb70a7b57"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.106974 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5x4hf" event={"ID":"c67dea52-b0b7-4b48-80e1-54d9754487ed","Type":"ContainerStarted","Data":"42f71487e6c9416d650fb9479378cc5eafc93ef527535d64bc2f9be928c2e21b"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.108753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerStarted","Data":"6a23ac3a0952fee762d7b612b6d50abf950d5b8d2ac6689a55a814e3e26c2a02"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.109956 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" 
event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerStarted","Data":"a4dd5884310a79cb7487b5f3cbe05eafb8d2a2c5440edad3ee0322f1cc8a15db"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.111056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"eed481f7d9690d5cd33c3bebacd3a1a1dad55b78483672ccb89eb85c02c576ac"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.112027 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.112354 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.612343622 +0000 UTC m=+147.029361600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.119926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerStarted","Data":"543bcf505a6976b4cac43a8840910c402bdffc26734b407176ab019a3047a028"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.122042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5rwbn" event={"ID":"a5a717f8-3264-4540-b132-ab42accb57f0","Type":"ContainerStarted","Data":"4dc7f3d9fbd69c6b3bc32848725cef8ee9c30f51518454b3233f7773a7d7124d"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.124032 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2sdwc" event={"ID":"f73ce613-5317-4f8e-82c9-4af380ed614c","Type":"ContainerStarted","Data":"4f581f5407a6a10e129097935adf47fd9662a2d23b30d8744f71fa374c086d98"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126182 4829 generic.go:334] "Generic (PLEG): container finished" podID="8bea1514-e813-4a49-80fb-cb8de9827a40" containerID="7e949a1d2aec2e7d5eedff72e200761ca5a220197097ae30241195a97cb781de" exitCode=0 Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126258 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" 
event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerDied","Data":"7e949a1d2aec2e7d5eedff72e200761ca5a220197097ae30241195a97cb781de"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.126279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"986bf2e4716199b7eac93016c0621eb1eebd1297e66326732489ae500ece8e31"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.136052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" event={"ID":"2b184f73-7f44-4ddb-b344-a5a635501c7d","Type":"ContainerStarted","Data":"5ef36e16aa06bc8181c9670f6577901ce907440a041ab3ba82612a3627f8e15f"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.137369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" event={"ID":"6410fb51-b781-4989-ba46-c7c6b189188b","Type":"ContainerStarted","Data":"7b15d8bc2751bd8736b3944e22fd70049f55782f76acc7c7bd4cd02aec3f909d"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.137411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" event={"ID":"6410fb51-b781-4989-ba46-c7c6b189188b","Type":"ContainerStarted","Data":"0cfce608f42d4974b1b6247e7a23a286e416801b3399c689c92146a376e0ffa2"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.138977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fq9th" event={"ID":"90ed6518-2fbf-4aa0-b136-d605a9cb972a","Type":"ContainerStarted","Data":"527719d05c26405f4f5254bcac7772cc42df0d531a22d05e6cb2bd21a5c61a4f"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.139002 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-fq9th" event={"ID":"90ed6518-2fbf-4aa0-b136-d605a9cb972a","Type":"ContainerStarted","Data":"6d63051986b02c5b3c19ad353aa74e0dfd6e12ac87e0899288bc275c04f0c22f"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.139171 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.140097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" event={"ID":"44a4515e-e65a-4069-bcfe-d84494a724cd","Type":"ContainerStarted","Data":"978f5f5767414e7f2a61137f2fae08ee5e7510ed0e1e3748d2c5a2e44aaf4349"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.140764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" event={"ID":"546891ca-dff6-4af9-a495-8bdd561e4233","Type":"ContainerStarted","Data":"9affd3ab68fd7b3c20e771fcd2f9967cee71ac3b87ea7c8b798d4dcf33912d21"} Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.141101 4829 patch_prober.go:28] interesting pod/console-operator-58897d9998-fq9th container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.141134 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fq9th" podUID="90ed6518-2fbf-4aa0-b136-d605a9cb972a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.212777 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.213785 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.713764972 +0000 UTC m=+147.130782950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.264879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.271790 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.277779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.314334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.315166 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.815151761 +0000 UTC m=+147.232169739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.324748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc"] Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.402751 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffa6856_9b00_44e9_81c6_643defb47c04.slice/crio-73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa WatchSource:0}: Error finding container 73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa: Status 404 returned error can't find the container with id 73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.403978 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 
15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.415045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.415314 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:14.915289065 +0000 UTC m=+147.332307043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.454636 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5rwbn" podStartSLOduration=125.454616381 podStartE2EDuration="2m5.454616381s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:14.453225234 +0000 UTC m=+146.870243212" watchObservedRunningTime="2026-02-17 15:57:14.454616381 +0000 UTC m=+146.871634359" Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.517131 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.518971 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.018952496 +0000 UTC m=+147.435970464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.618035 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.618190 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.118167286 +0000 UTC m=+147.535185264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.618510 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.619476 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.119462301 +0000 UTC m=+147.536480279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.687393 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.693920 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.713722 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xjtlq"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.715954 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.719913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.720191 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:15.220176901 +0000 UTC m=+147.637194879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.799876 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a11950_91e2_4d36_9d60_341b9a6b21b2.slice/crio-2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082 WatchSource:0}: Error finding container 2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082: Status 404 returned error can't find the container with id 2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082 Feb 17 15:57:14 crc kubenswrapper[4829]: W0217 15:57:14.802742 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16271aa7_2602_467c_b9aa_31c491952eb8.slice/crio-8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a WatchSource:0}: Error finding container 8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a: Status 404 returned error can't find the container with id 8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.821688 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: 
\"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.822050 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.322025123 +0000 UTC m=+147.739043101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.877627 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rrc2k"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.925950 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:14 crc kubenswrapper[4829]: E0217 15:57:14.926429 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.426415553 +0000 UTC m=+147.843433531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.967305 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.972215 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.984047 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.993106 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-pt2fg"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.996698 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl"] Feb 17 15:57:14 crc kubenswrapper[4829]: I0217 15:57:14.998988 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.004960 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pcvww"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.006810 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:15 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:15 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:15 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.006858 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.028290 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.028988 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.528957562 +0000 UTC m=+147.945975540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.030243 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bfb2da7_1a85_42f9_8c3f_c7997e85dd58.slice/crio-e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed WatchSource:0}: Error finding container e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed: Status 404 returned error can't find the container with id e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.087688 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26589ee7_3777_43d9_b378_df92780df986.slice/crio-ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77 WatchSource:0}: Error finding container ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77: Status 404 returned error can't find the container with id ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.130089 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.130431 4829 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.630412884 +0000 UTC m=+148.047431152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.182694 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" event={"ID":"67525a8a-c8e8-469c-a60d-1676ac5b057e","Type":"ContainerStarted","Data":"eda41034772f7bdeb5d62d6d5e72efb5492b3343ea32a02892e68333b850b929"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.193338 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.195864 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"ddccfcb85581db635c6f227e845eea525e7383e6f9f42887aab4f29f8b92ff77"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.206008 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" 
event={"ID":"44a4515e-e65a-4069-bcfe-d84494a724cd","Type":"ContainerStarted","Data":"27c05fe0520ce257814b0e3d807c25eb76e86257c1879b3887842ce44ef2fcf1"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.209061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.215368 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"36113181730fa1f7beb2ced6c6c8a0ef6d23eb8fce143213df4f409c8dff428c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.215805 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8wp4k"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.217511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dmlvg"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.218820 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" event={"ID":"d6a1e674-b813-4a95-b14e-a2774f390155","Type":"ContainerStarted","Data":"6899b897eae5b2b1565ae5797a1e9ca4e653c81ed21731e840093d7888e0dc31"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.220484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" event={"ID":"2b184f73-7f44-4ddb-b344-a5a635501c7d","Type":"ContainerStarted","Data":"9c6057c154aea9504dc5e44fc8488e5f722c96abd6234e1c1dd0a168293ecd4a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.222238 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" 
event={"ID":"c0ad3e99-7312-4c48-bbfc-5355df896d20","Type":"ContainerStarted","Data":"d33799c0407c610df2357b8b0d4b98ad4ff169623de6bcde5686a219f69fc75a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229655 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5ad87cd-b97f-483a-825a-46c77bd5d5e0" containerID="ed02cd9d7b185c18111c340613c8ded43af8f3c079eceb18aadb241b0edf7610" exitCode=0 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerDied","Data":"ed02cd9d7b185c18111c340613c8ded43af8f3c079eceb18aadb241b0edf7610"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.229756 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerStarted","Data":"0e6c61ff90668f94006eb63d0a4e0f845c2564df697e51d8d8e7863fb74c322a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.231317 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.231704 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.731689809 +0000 UTC m=+148.148707787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.252853 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.261330 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nnktd" podStartSLOduration=127.261313692 podStartE2EDuration="2m7.261313692s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.253961803 +0000 UTC m=+147.670979781" watchObservedRunningTime="2026-02-17 15:57:15.261313692 +0000 UTC m=+147.678331670" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.262635 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x"] Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.277076 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"8a4051d75d0a569d9ab067001b1eb1ef7ef5a2756c4abc2d56df35e7aaa688b4"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.299995 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" 
event={"ID":"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58","Type":"ContainerStarted","Data":"e1bd43ce1d065976e7fd13f105e9e94e9423f783894db0bd6ff90200b62ec0ed"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.303143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"73d3cc3fe34bc8b40a5844a71738fc4b0f4c1ded6d309662c135e9c51440f5fa"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.321867 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerStarted","Data":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.322070 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.328833 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84cacb3d_ec7c_4a92_a265_237ea9218b5e.slice/crio-7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1 WatchSource:0}: Error finding container 7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1: Status 404 returned error can't find the container with id 7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.329790 4829 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9v7jj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 17 
15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.329829 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.332107 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.333337 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.833324785 +0000 UTC m=+148.250342763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.352019 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" event={"ID":"546891ca-dff6-4af9-a495-8bdd561e4233","Type":"ContainerStarted","Data":"b5596fa44d35b6a6f32181ed865a4bf2d91d05fe27abf522bc55c567f046b272"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.358199 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2sdwc" event={"ID":"f73ce613-5317-4f8e-82c9-4af380ed614c","Type":"ContainerStarted","Data":"a4b024337416c36e86a222c63d908cb1882c0fb522fcc67f558830c3af29efc4"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.358253 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swcxx" podStartSLOduration=126.35823692 podStartE2EDuration="2m6.35823692s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.313464857 +0000 UTC m=+147.730482845" watchObservedRunningTime="2026-02-17 15:57:15.35823692 +0000 UTC m=+147.775254888" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.360032 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.361267 4829 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.361320 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.374447 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"5379e190f94a5a1d87b2808fd8f701566d6284eb3d7358e29ff99bed1c660cfe"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.374499 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"36d29f8d0d3e061013a2cb72db7d3525140ace63a2a69976bb863cc588d702e3"} Feb 17 15:57:15 crc kubenswrapper[4829]: W0217 15:57:15.376933 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf1e080_f5b6_4360_a74f_5524ece2120c.slice/crio-2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475 WatchSource:0}: Error finding container 2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475: Status 404 returned error can't find the container with id 2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.388991 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"33ea74caf3f710efa1c50da2d1988bfc860823b5f31111d7825d4392f9477810"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.389042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" event={"ID":"e3f36b68-dd7a-41a7-86ff-ebcf90897710","Type":"ContainerStarted","Data":"2d792cc359e5b53472d99af40cfbdde690b7bafbe063e4836d6b92fafb28a982"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.390838 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8v8bb" podStartSLOduration=126.390825514 podStartE2EDuration="2m6.390825514s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.390635729 +0000 UTC m=+147.807653707" watchObservedRunningTime="2026-02-17 15:57:15.390825514 +0000 UTC m=+147.807843492" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.411220 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" event={"ID":"32e15283-b4a3-40c9-8117-77d662f30438","Type":"ContainerStarted","Data":"22e76edd041efcfbf0dc5da922bf8d7a594fd427c6fdb877f0b8cc65f1b3d66e"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.411261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" event={"ID":"32e15283-b4a3-40c9-8117-77d662f30438","Type":"ContainerStarted","Data":"2c5da868cc99fbe2010be26b7d97a29f7850e268d600cd5a93e76a54acb1dd40"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.422145 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" event={"ID":"76ca2091-de8d-469c-832b-057ee57bb8ee","Type":"ContainerStarted","Data":"f26fec56587359c05d33f39e5c5ae96141b78bae60e393505ecc55ab81229826"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.422189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" event={"ID":"76ca2091-de8d-469c-832b-057ee57bb8ee","Type":"ContainerStarted","Data":"3157323b0193585b7e7e8fb85389c6beed7bffc48855be8a7f3b2d4229fd2148"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.428539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5x4hf" event={"ID":"c67dea52-b0b7-4b48-80e1-54d9754487ed","Type":"ContainerStarted","Data":"be112181820fca68d7ecea086c2d913941f087334cb5af8e9f7c31bd83eae60c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.433610 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.435442 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:15.935430423 +0000 UTC m=+148.352448401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.454362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"b1a2ebf23b6275b9a2761e1c747235db3c3bb107da694df042aa8cf585a8d6ae"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.459373 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fq9th" podStartSLOduration=126.459359282 podStartE2EDuration="2m6.459359282s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.457334156 +0000 UTC m=+147.874352134" watchObservedRunningTime="2026-02-17 15:57:15.459359282 +0000 UTC m=+147.876377260" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.459744 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"eb17c60a5af48946ed43715152a9653aa398d60247fdda4bb18ad05bc4aa3658"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.464858 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" 
event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerStarted","Data":"eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.464898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerStarted","Data":"dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.496110 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerStarted","Data":"8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.496614 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.497918 4829 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xn8fx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.497987 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.499303 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2l44d" podStartSLOduration=126.499290394 podStartE2EDuration="2m6.499290394s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.498144214 +0000 UTC m=+147.915162192" watchObservedRunningTime="2026-02-17 15:57:15.499290394 +0000 UTC m=+147.916308372" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.504497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"e87972fe228716c21ec7cecb1607e14e50dea5013a2a6768e543463984d2ebe1"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.506841 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" event={"ID":"34421a4c-a917-467e-938b-fe7e00cc76c4","Type":"ContainerStarted","Data":"178efd5fb7e07c92ea5f88e247dd25d64c95843ef475caa6ba3c9897df40ab0c"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.524669 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerStarted","Data":"054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531080 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerStarted","Data":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerStarted","Data":"7baa23e27dea651b430693897781e89b000dbe0f94cbc9c61bef0909c8c3ed1a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.531965 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.532782 4829 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8kmp8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.532833 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.538409 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" podStartSLOduration=126.538382274 podStartE2EDuration="2m6.538382274s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.536442032 +0000 UTC m=+147.953460010" watchObservedRunningTime="2026-02-17 15:57:15.538382274 +0000 UTC m=+147.955400252" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.540075 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.542260 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.042239249 +0000 UTC m=+148.459257227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.553311 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" event={"ID":"e8a98667-8884-4056-8577-3e7db8762ff9","Type":"ContainerStarted","Data":"f2fc0b2b1d8fdbbe3cc91226fa0a74a41e4544358631bc5af3ae12552a60853d"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.577342 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kb5nv" podStartSLOduration=128.57732458 podStartE2EDuration="2m8.57732458s" podCreationTimestamp="2026-02-17 15:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.574070202 +0000 UTC m=+147.991088180" watchObservedRunningTime="2026-02-17 15:57:15.57732458 +0000 UTC m=+147.994342558" Feb 17 15:57:15 crc 
kubenswrapper[4829]: I0217 15:57:15.583988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"ec7edf5ecebf89b444f3ce54ec59a1a67eb98262446dab1fd869ed6e92b9a7a7"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.593449 4829 generic.go:334] "Generic (PLEG): container finished" podID="c801e449-c529-4c10-a482-f6f3a8c24bb1" containerID="a68d67382eaa80ba8be14bf2537953dd5fa2811050d2a340647934a36708a69a" exitCode=0 Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.594262 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerDied","Data":"a68d67382eaa80ba8be14bf2537953dd5fa2811050d2a340647934a36708a69a"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.597077 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" event={"ID":"87a11950-91e2-4d36-9d60-341b9a6b21b2","Type":"ContainerStarted","Data":"2b5f2bc66bb84c30b8b1576c9c3ef131f6121f19ebcd7d6a3d0625f29b945082"} Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.610979 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fq9th" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.619044 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5x4hf" podStartSLOduration=5.619027581 podStartE2EDuration="5.619027581s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.617265863 +0000 UTC m=+148.034283841" 
watchObservedRunningTime="2026-02-17 15:57:15.619027581 +0000 UTC m=+148.036045559" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.644459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.644822 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.144809899 +0000 UTC m=+148.561827877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.697474 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2sdwc" podStartSLOduration=126.697456457 podStartE2EDuration="2m6.697456457s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.657257047 +0000 UTC m=+148.074275025" watchObservedRunningTime="2026-02-17 15:57:15.697456457 +0000 UTC m=+148.114474435" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.746048 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.747752 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.24773766 +0000 UTC m=+148.664755638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.749850 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z29z2" podStartSLOduration=126.749834297 podStartE2EDuration="2m6.749834297s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.696282625 +0000 UTC m=+148.113300603" watchObservedRunningTime="2026-02-17 15:57:15.749834297 +0000 UTC m=+148.166852275" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.781273 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podStartSLOduration=126.781254569 podStartE2EDuration="2m6.781254569s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.750422823 +0000 UTC m=+148.167440801" watchObservedRunningTime="2026-02-17 15:57:15.781254569 +0000 UTC m=+148.198272547" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.842549 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6f6lw" podStartSLOduration=126.84253178 podStartE2EDuration="2m6.84253178s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.790295434 +0000 UTC m=+148.207313422" watchObservedRunningTime="2026-02-17 15:57:15.84253178 +0000 UTC m=+148.259549758" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.847808 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.848134 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.348120831 +0000 UTC m=+148.765138809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.849787 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podStartSLOduration=126.849765876 podStartE2EDuration="2m6.849765876s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.842856079 +0000 UTC m=+148.259874047" watchObservedRunningTime="2026-02-17 15:57:15.849765876 +0000 UTC m=+148.266783854" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.902980 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5m4j8" podStartSLOduration=127.902960069 podStartE2EDuration="2m7.902960069s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:15.902512796 +0000 UTC m=+148.319530774" watchObservedRunningTime="2026-02-17 15:57:15.902960069 +0000 UTC m=+148.319978047" Feb 17 15:57:15 crc kubenswrapper[4829]: I0217 15:57:15.956954 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:15 crc kubenswrapper[4829]: E0217 15:57:15.957507 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.457493307 +0000 UTC m=+148.874511285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.036234 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:16 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:16 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:16 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.036318 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.052622 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9fgb2" podStartSLOduration=127.052610236 
podStartE2EDuration="2m7.052610236s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.05164807 +0000 UTC m=+148.468666048" watchObservedRunningTime="2026-02-17 15:57:16.052610236 +0000 UTC m=+148.469628204" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.052920 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podStartSLOduration=128.052915324 podStartE2EDuration="2m8.052915324s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.011014868 +0000 UTC m=+148.428032846" watchObservedRunningTime="2026-02-17 15:57:16.052915324 +0000 UTC m=+148.469933302" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.058616 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.059014 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.558998399 +0000 UTC m=+148.976016377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.062357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.077262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.095095 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-47kpc" podStartSLOduration=127.095076187 podStartE2EDuration="2m7.095076187s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.092264521 +0000 UTC m=+148.509282499" watchObservedRunningTime="2026-02-17 15:57:16.095076187 +0000 UTC m=+148.512094165" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.160301 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.160644 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.162692 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.662672009 +0000 UTC m=+149.079689987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.179564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.264961 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.265003 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.265286 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-17 15:57:16.765275691 +0000 UTC m=+149.182293669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.275538 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.306390 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.317867 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.367040 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.367335 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.867318687 +0000 UTC m=+149.284336665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.399866 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.468227 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.468703 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:16.968689416 +0000 UTC m=+149.385707394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.572855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.573239 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.07322312 +0000 UTC m=+149.490241098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.657310 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"d8738063a9316455aa27c7b35c49c10c3172bf359b044237c42da3eef4744bbb"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.660886 4829 csr.go:261] certificate signing request csr-4tf5h is approved, waiting to be issued Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.669709 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" event={"ID":"4e417c4d-c6be-42e9-a72a-9021805d4f7c","Type":"ContainerStarted","Data":"3b329ae85fc93b1598d5d767e87ae2040624d8c6a4601992fd1b1d4b2dfcd1a6"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.670438 4829 csr.go:257] certificate signing request csr-4tf5h is issued Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.679158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.679464 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.17945242 +0000 UTC m=+149.596470398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.691453 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" podStartSLOduration=127.691434905 podStartE2EDuration="2m7.691434905s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.157022116 +0000 UTC m=+148.574040094" watchObservedRunningTime="2026-02-17 15:57:16.691434905 +0000 UTC m=+149.108452873" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.691668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" event={"ID":"9061d74f-5644-4fa3-8484-4bcf2508dbfa","Type":"ContainerStarted","Data":"9ccbfd6c5f7897c15d38c599d7fe0f7f6e15f334abcf0e6dc65f342f2870a50b"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.692128 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" 
event={"ID":"9061d74f-5644-4fa3-8484-4bcf2508dbfa","Type":"ContainerStarted","Data":"91a8b654ea6318c7bdcc2e777ebbf594c43059ccd19d43ee5e4dde06114f594c"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.703979 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" event={"ID":"d2f48424-451a-4a3a-a539-eb6ad78c8944","Type":"ContainerStarted","Data":"b6869cb48429f4a2ef61daf17cf98bf920d992f26c46ce5ea4849b674cde3857"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.704024 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" event={"ID":"d2f48424-451a-4a3a-a539-eb6ad78c8944","Type":"ContainerStarted","Data":"08cae84475e5d7689195c5c8153e01beb68dddb6bb3480c07a782359ee74fdf0"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.704394 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.719206 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720233 4829 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6c88x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720282 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" podUID="d2f48424-451a-4a3a-a539-eb6ad78c8944" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.720449 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.732881 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn4qs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.732976 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.733306 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"1547c84f8887a6fa0af7b373472743c48e33c86cdfc43407b10c3f869057f845"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.733411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"296b15f58aefe25542504c198fd08590a3c9a8311f17649af14853f17ffcd7e6"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.741101 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8wp4k" podStartSLOduration=127.741083671 podStartE2EDuration="2m7.741083671s" 
podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.726326361 +0000 UTC m=+149.143344339" watchObservedRunningTime="2026-02-17 15:57:16.741083671 +0000 UTC m=+149.158101639" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.741691 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cgntr" podStartSLOduration=127.741685718 podStartE2EDuration="2m7.741685718s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.691771864 +0000 UTC m=+149.108789852" watchObservedRunningTime="2026-02-17 15:57:16.741685718 +0000 UTC m=+149.158703696" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.746800 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" podStartSLOduration=127.746785645 podStartE2EDuration="2m7.746785645s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.744917865 +0000 UTC m=+149.161935843" watchObservedRunningTime="2026-02-17 15:57:16.746785645 +0000 UTC m=+149.163803623" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.756771 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"937557ef3533f2c8b77563c62228d4de2da5388be1edd73e57e3a29446cd648d"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.756807 4829 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" event={"ID":"8264089d-eadc-4f77-9884-c162be2861fa","Type":"ContainerStarted","Data":"6723f685b755274b9a78fdb99273b071d30191d408a9e6244c59bbb0119f3a64"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.780279 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.780638 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.28054212 +0000 UTC m=+149.697560108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.780882 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.781292 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" event={"ID":"c5ad87cd-b97f-483a-825a-46c77bd5d5e0","Type":"ContainerStarted","Data":"b1e81ad1d4a0791c0992752500dff9bd438d1dd7f49591003e0b869a61c1b227"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.781751 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.782643 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.282631938 +0000 UTC m=+149.699649916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.799987 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podStartSLOduration=127.799965007 podStartE2EDuration="2m7.799965007s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.767725773 +0000 UTC m=+149.184743751" watchObservedRunningTime="2026-02-17 15:57:16.799965007 +0000 UTC m=+149.216982985" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.812416 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"633877f7f2aa0dcecf10c5c81b060f81e687b4e2737f8b112ab0b974acaf5016"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.812456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"7ee1fed18798ca34fddd9d160a09ba8c8b65cb5e86d5fb80dd0237d3cd2708f1"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.822498 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:16 crc 
kubenswrapper[4829]: I0217 15:57:16.837143 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" podStartSLOduration=128.837126675 podStartE2EDuration="2m8.837126675s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.836331223 +0000 UTC m=+149.253349201" watchObservedRunningTime="2026-02-17 15:57:16.837126675 +0000 UTC m=+149.254144653" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.837717 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m5kf7" podStartSLOduration=127.83771079 podStartE2EDuration="2m7.83771079s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.802342742 +0000 UTC m=+149.219360720" watchObservedRunningTime="2026-02-17 15:57:16.83771079 +0000 UTC m=+149.254728768" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.848892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" event={"ID":"2bfb2da7-1a85-42f9-8c3f-c7997e85dd58","Type":"ContainerStarted","Data":"a1910f24f6c6a7cacf9e979d638a329fe2c97f714685164f68b63982184a4981"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.881117 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" event={"ID":"34421a4c-a917-467e-938b-fe7e00cc76c4","Type":"ContainerStarted","Data":"cedd0fd2d5fccc9a02a98e40a59dca56e24aafb03b875f1fab3154761ba7c22f"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.881843 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 15:57:16.885610 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.385593529 +0000 UTC m=+149.802611507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.885831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.891213 4829 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wj6cl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.891259 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" podUID="34421a4c-a917-467e-938b-fe7e00cc76c4" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.903070 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" podStartSLOduration=127.903055802 podStartE2EDuration="2m7.903055802s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.874519889 +0000 UTC m=+149.291537867" watchObservedRunningTime="2026-02-17 15:57:16.903055802 +0000 UTC m=+149.320073780" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.922031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dmlvg" event={"ID":"9b45ddda-3269-494c-b1d6-c1219a8f61db","Type":"ContainerStarted","Data":"ae6cc45f69c55d7db389700e5c08416c5c60975747df576f7f2f35a74fa04782"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.922456 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dmlvg" event={"ID":"9b45ddda-3269-494c-b1d6-c1219a8f61db","Type":"ContainerStarted","Data":"cae98fe6706b1b7557a768201f72363f8d2f6b9548660e741144060d3fb2ebc8"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.925923 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sqmls" podStartSLOduration=127.925905262 podStartE2EDuration="2m7.925905262s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.907373899 +0000 UTC m=+149.324391867" watchObservedRunningTime="2026-02-17 15:57:16.925905262 +0000 UTC m=+149.342923240" Feb 17 15:57:16 crc 
kubenswrapper[4829]: I0217 15:57:16.958943 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" event={"ID":"d0af9147-4f17-470b-a49e-5a75ff9b5005","Type":"ContainerStarted","Data":"0b1f99fc51614f4b7fc9afa656921bbdfccae9d934a2bc74385cb3ce76dc2acb"} Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.973022 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" podStartSLOduration=127.973002429 podStartE2EDuration="2m7.973002429s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.928334028 +0000 UTC m=+149.345352006" watchObservedRunningTime="2026-02-17 15:57:16.973002429 +0000 UTC m=+149.390020417" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.973198 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dmlvg" podStartSLOduration=6.973193894 podStartE2EDuration="6.973193894s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.970352757 +0000 UTC m=+149.387370735" watchObservedRunningTime="2026-02-17 15:57:16.973193894 +0000 UTC m=+149.390211872" Feb 17 15:57:16 crc kubenswrapper[4829]: I0217 15:57:16.985909 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:16 crc kubenswrapper[4829]: E0217 
15:57:16.994689 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.494673326 +0000 UTC m=+149.911691294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.015200 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:17 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:17 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:17 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.015248 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.049299 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" event={"ID":"1bf1e080-f5b6-4360-a74f-5524ece2120c","Type":"ContainerStarted","Data":"a512859ca31c760893fb4c1cc711494b226e8ad4c97534f217d50f1afaa5bc34"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 
15:57:17.049338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" event={"ID":"1bf1e080-f5b6-4360-a74f-5524ece2120c","Type":"ContainerStarted","Data":"2df065f14ca13aeedd7d1f342224fd5efe2887aff8ced0634002f8246017e475"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.070669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-clr5s" podStartSLOduration=128.070650746 podStartE2EDuration="2m8.070650746s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.023126668 +0000 UTC m=+149.440144646" watchObservedRunningTime="2026-02-17 15:57:17.070650746 +0000 UTC m=+149.487668714" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076445 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"831941afd1b5f7e2e1478ae4342a5185c00290bf9f671a226ad456512c9727d8"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"3221ce37891012f56a8d7ec178ce30eb7e76a1d1c93de1b3e7f08982c8cb3e4a"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.076492 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" event={"ID":"708b9214-1619-4dff-a626-027ee223f939","Type":"ContainerStarted","Data":"d4adbdae49159dd3878f255eb9972440e57739daebf6bdd412077b66445ac73a"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.084524 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mkbhc" podStartSLOduration=128.084508502 podStartE2EDuration="2m8.084508502s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.082628461 +0000 UTC m=+149.499646439" watchObservedRunningTime="2026-02-17 15:57:17.084508502 +0000 UTC m=+149.501526480" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.088274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.089149 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.589134207 +0000 UTC m=+150.006152185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.091917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" event={"ID":"c801e449-c529-4c10-a482-f6f3a8c24bb1","Type":"ContainerStarted","Data":"726e77e8162f3984c39e705776a8363dd40b05c8f0057d8cd04ec0dc488a2857"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.093488 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" event={"ID":"d6a1e674-b813-4a95-b14e-a2774f390155","Type":"ContainerStarted","Data":"43af76111f13869523242abaecf7ef61a624193affdd0ef4088c5f9d75c04cb3"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.096165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" event={"ID":"5c008a05-c20f-4b78-b8f3-0ebb1ccf6569","Type":"ContainerStarted","Data":"d38b5a6ddeeb1117fd0f7d5af102725ff891c08f604cb48a5b370b61f04ec506"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.098308 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" event={"ID":"c0ad3e99-7312-4c48-bbfc-5355df896d20","Type":"ContainerStarted","Data":"befdfc9584a897dc19ead991a881040e7048710bc4a9c1f085df1c1c7fc95cae"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.098898 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 
15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.101408 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"481d865e25c664226e79682536a82dcc4bd81b19e0315cfcb10786ca946883f5"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.101429 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" event={"ID":"fffa6856-9b00-44e9-81c6-643defb47c04","Type":"ContainerStarted","Data":"3659c6fd523df2df7e164fe3b9b35230f92c34c59862e8684f6a8beee303f58f"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.102924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerStarted","Data":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.103517 4829 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xn8fx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.103542 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.110313 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-krtjv" podStartSLOduration=128.110301981 podStartE2EDuration="2m8.110301981s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.107983228 +0000 UTC m=+149.525001206" watchObservedRunningTime="2026-02-17 15:57:17.110301981 +0000 UTC m=+149.527319959" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.148896 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6pkfx" event={"ID":"87a11950-91e2-4d36-9d60-341b9a6b21b2","Type":"ContainerStarted","Data":"64f80805350a7166c111ca1105c4fc9581caebb1f5d00e83c7b51977866db4bd"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.151140 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xjtlq" podStartSLOduration=128.151131068 podStartE2EDuration="2m8.151131068s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.149644498 +0000 UTC m=+149.566662476" watchObservedRunningTime="2026-02-17 15:57:17.151131068 +0000 UTC m=+149.568149046" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.192663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.193888 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.693876937 +0000 UTC m=+150.110894915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.200978 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" event={"ID":"8bea1514-e813-4a49-80fb-cb8de9827a40","Type":"ContainerStarted","Data":"863eb000e928639403baef8d73809eaee49c1644f0f46b7f5ad5165d8ae72507"} Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.203138 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.203177 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.212981 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 
15:57:17.214444 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2zdl6" podStartSLOduration=128.214433404 podStartE2EDuration="2m8.214433404s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.213345825 +0000 UTC m=+149.630363803" watchObservedRunningTime="2026-02-17 15:57:17.214433404 +0000 UTC m=+149.631451382" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.258433 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" podStartSLOduration=128.258419217 podStartE2EDuration="2m8.258419217s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.255117457 +0000 UTC m=+149.672135435" watchObservedRunningTime="2026-02-17 15:57:17.258419217 +0000 UTC m=+149.675437195" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.273413 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" podStartSLOduration=128.273396292 podStartE2EDuration="2m8.273396292s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.272820397 +0000 UTC m=+149.689838375" watchObservedRunningTime="2026-02-17 15:57:17.273396292 +0000 UTC m=+149.690414270" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.296498 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.298114 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.798088762 +0000 UTC m=+150.215106740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.299319 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.303893 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.803859408 +0000 UTC m=+150.220877386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.314888 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m79xc" podStartSLOduration=128.314863597 podStartE2EDuration="2m8.314863597s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.301038382 +0000 UTC m=+149.718056370" watchObservedRunningTime="2026-02-17 15:57:17.314863597 +0000 UTC m=+149.731881575" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.416070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.416344 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:17.916329927 +0000 UTC m=+150.333347905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.457090 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" podStartSLOduration=129.457074913 podStartE2EDuration="2m9.457074913s" podCreationTimestamp="2026-02-17 15:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:17.390839977 +0000 UTC m=+149.807857955" watchObservedRunningTime="2026-02-17 15:57:17.457074913 +0000 UTC m=+149.874092891" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.517230 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.517552 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.017540482 +0000 UTC m=+150.434558460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.542085 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.619380 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.619836 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.119817505 +0000 UTC m=+150.536835483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.672436 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 15:52:16 +0000 UTC, rotation deadline is 2026-12-14 20:15:18.757333616 +0000 UTC Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.672730 4829 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7204h18m1.084606306s for next certificate rotation Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.720955 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.721289 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.221278305 +0000 UTC m=+150.638296283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.736125 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.736652 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.793374 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.793423 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.821872 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.822237 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:18.322222502 +0000 UTC m=+150.739240470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:17 crc kubenswrapper[4829]: I0217 15:57:17.923026 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:17 crc kubenswrapper[4829]: E0217 15:57:17.923550 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.423539589 +0000 UTC m=+150.840557567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.000114 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:18 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:18 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:18 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.000160 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.024610 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.024886 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:18.524860996 +0000 UTC m=+150.941878974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.024982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.025271 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.525263536 +0000 UTC m=+150.942281514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.099709 4829 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hpnl2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.099774 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" podUID="c0ad3e99-7312-4c48-bbfc-5355df896d20" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.126104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.126258 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:18.626239115 +0000 UTC m=+151.043257093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.126451 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.126755 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.626744808 +0000 UTC m=+151.043762786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.207086 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" event={"ID":"26589ee7-3777-43d9-b378-df92780df986","Type":"ContainerStarted","Data":"d41b991d10b6766ee512d7aae8b46900e6cffbbf2648151a449eb6ad40c72622"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.208846 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pcvww" event={"ID":"b341af34-7b4a-4137-adc0-eb743588d455","Type":"ContainerStarted","Data":"65f946857047d98153311062180757a73e6eeddd287a4330d203ed29423d9e58"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.208953 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.210312 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" event={"ID":"84cacb3d-ec7c-4a92-a265-237ea9218b5e","Type":"ContainerStarted","Data":"9e14e1a03a60219c0ef53547850b97729227c7a6e1e17cc1d411ea1866f73cfe"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.211660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3a380a1770f0bb511732fcc1623a1e5479af7d675e765af40ac262b823836216"} Feb 17 15:57:18 crc 
kubenswrapper[4829]: I0217 15:57:18.211704 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"43f022f41f64d1f1b764b9e81c31205378f11145f4f781121cd851f3b4fbcff0"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7b72de055bcc4f0a409c26a96620551e2a27114bd83ca51aeff554d64617b848"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212849 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"76e0b2b19feb9de939b6f44585b1cbf15e1d2194f62da4593d8290e18d6a5523"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.212991 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.214000 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"229143a17a645cb998990e9718ade2541120d9254779d36b0c5dcf21436b325f"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.214042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4e0b6e998edb4dcdc67fd15619330062fc752aba15ac979bd8b57b8d4bf05739"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.215180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"6a22613625cade5750324cad03dcbf97c046ca6d64eb183613ac0b204d9f1fcb"} Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.216956 4829 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zn4qs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.216993 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.225492 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.228081 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.228220 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.728201498 +0000 UTC m=+151.145219476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.228353 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.228677 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.728669402 +0000 UTC m=+151.145687380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.244083 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-pt2fg" podStartSLOduration=129.244067619 podStartE2EDuration="2m9.244067619s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:18.238819867 +0000 UTC m=+150.655837845" watchObservedRunningTime="2026-02-17 15:57:18.244067619 +0000 UTC m=+150.661085597" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.285220 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6c88x" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.329381 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.329480 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:18.829457654 +0000 UTC m=+151.246475632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.332170 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.335629 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:18.835617861 +0000 UTC m=+151.252635839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.359037 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pcvww" podStartSLOduration=8.359015796 podStartE2EDuration="8.359015796s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:18.357034961 +0000 UTC m=+150.774052939" watchObservedRunningTime="2026-02-17 15:57:18.359015796 +0000 UTC m=+150.776033774" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.378756 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wj6cl" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.432700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.433835 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:18.933813653 +0000 UTC m=+151.350831631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.537019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.537433 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.037421222 +0000 UTC m=+151.454439200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.638527 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.638812 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.138788181 +0000 UTC m=+151.555806159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.740007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.740503 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.240472087 +0000 UTC m=+151.657490065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.761043 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpnl2" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.844388 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.844597 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.344553269 +0000 UTC m=+151.761571247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.844755 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.845121 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.345114334 +0000 UTC m=+151.762132312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.865188 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.945466 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.945715 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.445689301 +0000 UTC m=+151.862707269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:18 crc kubenswrapper[4829]: I0217 15:57:18.945771 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:18 crc kubenswrapper[4829]: E0217 15:57:18.946201 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.446169594 +0000 UTC m=+151.863187572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.000709 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:19 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.000797 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.047327 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.047456 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:57:19.547435529 +0000 UTC m=+151.964453507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.047628 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.047921 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.547913713 +0000 UTC m=+151.964931691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.148968 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.149305 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.649289801 +0000 UTC m=+152.066307779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.209697 4829 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pdm8f container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]log ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]etcd ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 15:57:19 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-startinformers ok Feb 17 15:57:19 crc kubenswrapper[4829]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 15:57:19 crc 
kubenswrapper[4829]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:57:19 crc kubenswrapper[4829]: livez check failed Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.209755 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" podUID="8bea1514-e813-4a49-80fb-cb8de9827a40" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.234685 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"9ce86529239be6427a836aac4379fc901e154f11f0a7c8e81c6f33235f7e23cf"} Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.248843 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fbwnl" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.250185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.250690 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.75066384 +0000 UTC m=+152.167681818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.257330 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lbqc5" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.269717 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.350803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.351051 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.85102274 +0000 UTC m=+152.268040718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.351437 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.355659 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.855648996 +0000 UTC m=+152.272666964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.452558 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.452686 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.952660036 +0000 UTC m=+152.369678024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.452784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.453035 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:19.953022625 +0000 UTC m=+152.370040603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.554499 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.554747 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.054718353 +0000 UTC m=+152.471736331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.555143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.555500 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.055483953 +0000 UTC m=+152.472501931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.653306 4829 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.656348 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.656537 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.156509772 +0000 UTC m=+152.573527740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.656706 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.657068 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.157061557 +0000 UTC m=+152.574079535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.757317 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.757527 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.257504191 +0000 UTC m=+152.674522169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.757853 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.758119 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.258105937 +0000 UTC m=+152.675123915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.784214 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"]
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.785248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.787883 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.799767 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"]
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.859024 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.859174 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.359149776 +0000 UTC m=+152.776167754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.859234 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.859522 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.359509875 +0000 UTC m=+152.776527853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.960898 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.961067 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.461039379 +0000 UTC m=+152.878057357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961207 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961233 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961290 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.961316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:19 crc kubenswrapper[4829]: E0217 15:57:19.961674 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.461658835 +0000 UTC m=+152.878676813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.986207 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plxhn"]
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.987078 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:19 crc kubenswrapper[4829]: I0217 15:57:19.989098 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.000709 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:57:20 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld
Feb 17 15:57:20 crc kubenswrapper[4829]: [+]process-running ok
Feb 17 15:57:20 crc kubenswrapper[4829]: healthz check failed
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.000760 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.008376 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plxhn"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.062834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.062989 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.562950682 +0000 UTC m=+152.979968660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063136 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.063520 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.563512147 +0000 UTC m=+152.980530125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.063684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.064043 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.109631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"certified-operators-z4qsx\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.163893 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.164069 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.664041702 +0000 UTC m=+153.081059680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.164413 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.164427 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.664413312 +0000 UTC m=+153.081431370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.179433 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.180291 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.189330 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.253231 4829 generic.go:334] "Generic (PLEG): container finished" podID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerID="eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896" exitCode=0
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.253325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerDied","Data":"eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896"}
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.255481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"35d90a2a6c53823db40d88de375ba86f18474c7e6fd718e0c4eb00068dfae0dd"}
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.255522 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" event={"ID":"316979dc-a708-402a-94b0-d4d6bad3c7ca","Type":"ContainerStarted","Data":"d8908f9d0e1e550ade00fc370466c6ed9b445cf0b8ee93135fd47d046d41d94f"}
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265746 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265927 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.265977 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.266353 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.266481 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.766467699 +0000 UTC m=+153.183485677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.266702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.280680 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.281224 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.283323 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.284877 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.341299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"community-operators-plxhn\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " pod="openshift-marketplace/community-operators-plxhn"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.359814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368611 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.368639 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j"
Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.369213 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.869202784 +0000 UTC m=+153.286220752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zht4j" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.389084 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pc95c"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.390010 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc95c"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.397879 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.407841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rrc2k" podStartSLOduration=10.407825212 podStartE2EDuration="10.407825212s" podCreationTimestamp="2026-02-17 15:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:20.405858539 +0000 UTC m=+152.822876517" watchObservedRunningTime="2026-02-17 15:57:20.407825212 +0000 UTC m=+152.824843190"
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.412850 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc95c"]
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.467698 4829 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T15:57:19.653334416Z","Handler":null,"Name":""}
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470163 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470454 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470553 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470585 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.470611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.471067 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod 
\"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: E0217 15:57:20.471188 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:57:20.971144608 +0000 UTC m=+153.388162586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.471429 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.494676 4829 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.494721 4829 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.499630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"certified-operators-cd6xf\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572604 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572730 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572755 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.572817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.573187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.582603 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.582652 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.600275 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.617181 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679392 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 
15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.679515 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.680332 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.680643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.682958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zht4j\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.715387 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"community-operators-pc95c\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " 
pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.762985 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.780095 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.792677 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.796228 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.812831 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.904745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4829]: I0217 15:57:20.934296 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 15:57:20 crc kubenswrapper[4829]: W0217 15:57:20.959765 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a5cfa35_799d_41b4_afa1_e5d056ceed8c.slice/crio-528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4 WatchSource:0}: Error finding container 528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4: Status 404 returned error can't find the container with id 528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.002506 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:21 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:21 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:21 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.002556 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.006882 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.186199 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.197893 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddd19c165_e47a_4b7f_aaf1_cd266eeb9cc1.slice/crio-337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a WatchSource:0}: Error finding container 337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a: Status 404 returned error can't find the container with id 337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.245892 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.249966 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958bc260_664c_466f_afd3_9a7ac9c119bf.slice/crio-e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf WatchSource:0}: Error finding container e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf: Status 404 returned error can't find the container with id e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262026 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262079 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" 
event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.262103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.267318 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerStarted","Data":"337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.273687 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.274177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.284533 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.287478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289342 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289495 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f"} Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.289535 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"9f6b76db525ea1716f4c1ce5158f77a01ac87265be5d53578be8975ef1a1c0b8"} Feb 17 15:57:21 crc kubenswrapper[4829]: W0217 15:57:21.301354 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc817ced_7abe_422d_af13_779118b5fe0f.slice/crio-e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542 WatchSource:0}: Error finding container e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542: Status 404 returned error can't find the container with id e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542 Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.548423 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.693923 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.694018 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.694045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") pod \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\" (UID: \"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f\") " Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.695193 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.700226 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p" (OuterVolumeSpecName: "kube-api-access-rnj6p") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). 
InnerVolumeSpecName "kube-api-access-rnj6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.700601 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" (UID: "0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795112 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795488 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.795503 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnj6p\" (UniqueName: \"kubernetes.io/projected/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f-kube-api-access-rnj6p\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981034 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:21 crc kubenswrapper[4829]: E0217 15:57:21.981220 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981230 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.981323 4829 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" containerName="collect-profiles" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.982024 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.984348 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:57:21 crc kubenswrapper[4829]: I0217 15:57:21.994507 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.003445 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:22 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:22 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:22 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.003501 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099275 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099667 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.099800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201120 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.201155 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 
15:57:22.201847 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.203589 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.220216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"redhat-marketplace-lg78k\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.307033 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.310469 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.336958 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerStarted","Data":"37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.337020 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerStarted","Data":"e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.337987 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.360997 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerID="89bc178927ed753306d120abe1c9fd96720b7ede9c5f70c06adb09dd17ed7ea0" exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.361111 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerDied","Data":"89bc178927ed753306d120abe1c9fd96720b7ede9c5f70c06adb09dd17ed7ea0"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" 
event={"ID":"0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f","Type":"ContainerDied","Data":"dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365799 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadd85eb0210bc5e02b98e2cd0376b98664e5c4f3a7d87056cccace1188549ea" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.365928 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.369626 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.369683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.380031 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" podStartSLOduration=133.380016711 podStartE2EDuration="2m13.380016711s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:22.379527667 +0000 UTC m=+154.796545655" watchObservedRunningTime="2026-02-17 15:57:22.380016711 +0000 UTC m=+154.797034689" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383551 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" 
exitCode=0 Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383635 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.383665 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"5acc356c5d2ec47c5d87b88d2204b71dfd80af3eab05b77d8870f888eb4da2ab"} Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.402975 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.403900 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.413211 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.425329 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.425384 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.512455 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.512931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.513001 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.604509 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsznk\" (UniqueName: 
\"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.617430 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.619224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.620060 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.639238 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"redhat-marketplace-m5whh\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.739975 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:22 crc 
kubenswrapper[4829]: I0217 15:57:22.743426 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pdm8f" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.749949 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.979942 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.981340 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.990110 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.995427 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:22 crc kubenswrapper[4829]: I0217 15:57:22.997963 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.006524 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:23 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:23 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:23 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.006593 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" 
podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134377 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.134443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.142766 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.142820 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.144175 4829 patch_prober.go:28] interesting pod/console-f9d7485db-9fgb2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 
10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.144208 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9fgb2" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178715 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178784 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178887 4829 patch_prober.go:28] interesting pod/downloads-7954f5f757-2sdwc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.178931 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2sdwc" podUID="f73ce613-5317-4f8e-82c9-4af380ed614c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.182944 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 
15:57:23 crc kubenswrapper[4829]: W0217 15:57:23.194713 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43b8d950_926a_4dc1_82a3_be0e61618dff.slice/crio-e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259 WatchSource:0}: Error finding container e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259: Status 404 returned error can't find the container with id e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.238921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.239034 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.239083 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.240038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"redhat-operators-pzvbr\" (UID: 
\"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.240363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.273505 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"redhat-operators-pzvbr\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.309813 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.394543 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.396100 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.440212 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529094 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" exitCode=0 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.529239 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerStarted","Data":"d19f6da1913041c5fd10e98efa71ae0ed6c2d8facfc11c2aa17840a88a15c77f"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.533190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerStarted","Data":"e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259"} Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554345 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.554382 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655679 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.655768 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.657516 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.658840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.682554 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"redhat-operators-8fpmz\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.759092 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.768551 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 15:57:23 crc kubenswrapper[4829]: W0217 15:57:23.814043 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8370c4f_c05e_425c_a267_c270e36b5dfd.slice/crio-d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4 WatchSource:0}: Error finding container d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4: Status 404 returned error can't find the container with id d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4 Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.843992 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.845780 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.852789 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.852800 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.858897 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.898828 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.960293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:23 crc kubenswrapper[4829]: I0217 15:57:23.960339 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.012641 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:24 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:24 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:24 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.012961 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.068667 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") pod 
\"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.068803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") pod \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\" (UID: \"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1\") " Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.069026 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.069053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.071048 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" (UID: "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.071173 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.074795 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" (UID: "dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.084513 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.175209 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.175245 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.198730 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.363415 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:57:24 crc kubenswrapper[4829]: W0217 15:57:24.405476 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dfe32e4_aee9_408a_9b01_4ab9f4da515f.slice/crio-f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75 WatchSource:0}: Error finding container f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75: Status 404 returned error can't find the container with id f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1","Type":"ContainerDied","Data":"337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565442 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337014378fdbc081a7f8641c15c6feef1e828c63d5df5d4de941104bc4ec3b4a" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.565506 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569567 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" exitCode=0 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569614 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.569668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.573686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.577023 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e" exitCode=0 Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.577735 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e"} Feb 17 15:57:24 crc kubenswrapper[4829]: I0217 15:57:24.692757 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 15:57:24 crc kubenswrapper[4829]: W0217 15:57:24.727669 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podee0fd92e_e4d2_4523_97bd_58e10e78bc41.slice/crio-dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d WatchSource:0}: Error finding container dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d: Status 404 returned error can't find the container with id dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.000850 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:25 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:25 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:25 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.000914 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.587036 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" exitCode=0 Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.587434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd"} Feb 17 15:57:25 crc 
kubenswrapper[4829]: I0217 15:57:25.590617 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerStarted","Data":"99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93"} Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.590658 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerStarted","Data":"dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d"} Feb 17 15:57:25 crc kubenswrapper[4829]: I0217 15:57:25.641118 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.641102733 podStartE2EDuration="2.641102733s" podCreationTimestamp="2026-02-17 15:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:25.640175958 +0000 UTC m=+158.057193936" watchObservedRunningTime="2026-02-17 15:57:25.641102733 +0000 UTC m=+158.058120711" Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.000989 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:26 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:26 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:26 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.001043 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.638320 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerID="99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93" exitCode=0 Feb 17 15:57:26 crc kubenswrapper[4829]: I0217 15:57:26.638707 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerDied","Data":"99a7bc665044a59acf42754a00f604b43cc5b6460474ef87ae5534f9eed96d93"} Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.000011 4829 patch_prober.go:28] interesting pod/router-default-5444994796-5rwbn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:57:27 crc kubenswrapper[4829]: [-]has-synced failed: reason withheld Feb 17 15:57:27 crc kubenswrapper[4829]: [+]process-running ok Feb 17 15:57:27 crc kubenswrapper[4829]: healthz check failed Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.000109 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5rwbn" podUID="a5a717f8-3264-4540-b132-ab42accb57f0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:57:27 crc kubenswrapper[4829]: I0217 15:57:27.979525 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.001555 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.008704 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5rwbn" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.074830 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") pod \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.074874 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") pod \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\" (UID: \"ee0fd92e-e4d2-4523-97bd-58e10e78bc41\") " Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.075909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee0fd92e-e4d2-4523-97bd-58e10e78bc41" (UID: "ee0fd92e-e4d2-4523-97bd-58e10e78bc41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.081487 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee0fd92e-e4d2-4523-97bd-58e10e78bc41" (UID: "ee0fd92e-e4d2-4523-97bd-58e10e78bc41"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.175949 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.175980 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee0fd92e-e4d2-4523-97bd-58e10e78bc41-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.495993 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pcvww" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684516 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ee0fd92e-e4d2-4523-97bd-58e10e78bc41","Type":"ContainerDied","Data":"dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d"} Feb 17 15:57:28 crc kubenswrapper[4829]: I0217 15:57:28.684640 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd10ba1f0d8442dbece81b8f1af675a906424dd3a60fb5c1e5f1e70ed11314d" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.327810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.333944 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c29406b-a65e-4386-8f7c-ac9dc76fb4cb-metrics-certs\") pod \"network-metrics-daemon-xdb29\" (UID: \"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb\") " pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:32 crc kubenswrapper[4829]: I0217 15:57:32.608325 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xdb29" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.184961 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2sdwc" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.272904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:33 crc kubenswrapper[4829]: I0217 15:57:33.280562 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 15:57:40 crc kubenswrapper[4829]: I0217 15:57:40.773183 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.908044 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.908672 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bzhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pc95c_openshift-marketplace(958bc260-664c-466f-afd3-9a7ac9c119bf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:51 crc kubenswrapper[4829]: E0217 15:57:51.909948 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" Feb 17 15:57:52 crc 
kubenswrapper[4829]: I0217 15:57:52.424393 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:57:52 crc kubenswrapper[4829]: I0217 15:57:52.424442 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.200186 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.284140 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.284281 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-429d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cd6xf_openshift-marketplace(8d559324-3a7f-41a3-9229-b2b96294faad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.286248 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" Feb 17 15:57:53 crc 
kubenswrapper[4829]: I0217 15:57:53.739106 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cgktd" Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.743283 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xdb29"] Feb 17 15:57:53 crc kubenswrapper[4829]: W0217 15:57:53.758395 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c29406b_a65e_4386_8f7c_ac9dc76fb4cb.slice/crio-bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e WatchSource:0}: Error finding container bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e: Status 404 returned error can't find the container with id bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.827283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.845642 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692" exitCode=0 Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.845760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.867960 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" 
containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" exitCode=0 Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.868279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.883903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.889371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.891531 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"bf42f3175fbf349570fd73d8604bf9549c4bfca388ff2a2932cb0e5ce380470e"} Feb 17 15:57:53 crc kubenswrapper[4829]: I0217 15:57:53.895135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d"} Feb 17 15:57:53 crc kubenswrapper[4829]: E0217 15:57:53.900356 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.900643 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.900850 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.903778 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.903843 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.913244 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"128e5311b92e0ff5adac5b190ca185777df7094e564e6e77f54a20afef790025"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.915156 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d" exitCode=0 Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.915327 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.917688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} Feb 17 15:57:54 crc kubenswrapper[4829]: I0217 15:57:54.917529 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" exitCode=0 Feb 17 15:57:55 crc kubenswrapper[4829]: I0217 15:57:55.946272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xdb29" event={"ID":"9c29406b-a65e-4386-8f7c-ac9dc76fb4cb","Type":"ContainerStarted","Data":"ac628cf13344886cb954b95a68ba728d2c1763eba31ef74a7471eb425d7f3b99"} Feb 17 15:57:55 crc kubenswrapper[4829]: I0217 15:57:55.966104 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xdb29" podStartSLOduration=166.966089207 podStartE2EDuration="2m46.966089207s" podCreationTimestamp="2026-02-17 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:55.96547636 +0000 UTC m=+188.382494348" watchObservedRunningTime="2026-02-17 15:57:55.966089207 +0000 UTC m=+188.383107185" Feb 17 15:57:56 crc kubenswrapper[4829]: I0217 15:57:56.406104 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.959266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerStarted","Data":"9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425"} Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.961894 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerStarted","Data":"22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f"} Feb 17 15:57:57 crc kubenswrapper[4829]: I0217 15:57:57.977106 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5whh" podStartSLOduration=4.620229058 podStartE2EDuration="35.977093258s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="2026-02-17 15:57:24.579325126 +0000 UTC m=+156.996343104" lastFinishedPulling="2026-02-17 15:57:55.936189326 +0000 UTC m=+188.353207304" observedRunningTime="2026-02-17 15:57:57.975863255 +0000 UTC m=+190.392881253" watchObservedRunningTime="2026-02-17 15:57:57.977093258 +0000 UTC m=+190.394111236" Feb 17 15:57:58 crc kubenswrapper[4829]: I0217 15:57:58.985768 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plxhn" podStartSLOduration=4.431701224 podStartE2EDuration="39.985743554s" podCreationTimestamp="2026-02-17 15:57:19 +0000 UTC" firstStartedPulling="2026-02-17 15:57:21.273380318 +0000 UTC m=+153.690398286" lastFinishedPulling="2026-02-17 15:57:56.827422618 +0000 UTC m=+189.244440616" observedRunningTime="2026-02-17 15:57:58.984763187 +0000 UTC m=+191.401781175" watchObservedRunningTime="2026-02-17 15:57:58.985743554 +0000 UTC m=+191.402761562" Feb 17 15:57:59 crc kubenswrapper[4829]: I0217 15:57:59.986885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" 
event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerStarted","Data":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.004317 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lg78k" podStartSLOduration=3.630089022 podStartE2EDuration="39.004301408s" podCreationTimestamp="2026-02-17 15:57:21 +0000 UTC" firstStartedPulling="2026-02-17 15:57:23.532128825 +0000 UTC m=+155.949146803" lastFinishedPulling="2026-02-17 15:57:58.906341221 +0000 UTC m=+191.323359189" observedRunningTime="2026-02-17 15:58:00.002696915 +0000 UTC m=+192.419714883" watchObservedRunningTime="2026-02-17 15:58:00.004301408 +0000 UTC m=+192.421319386" Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.600531 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:00 crc kubenswrapper[4829]: I0217 15:58:00.600866 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436026 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:01 crc kubenswrapper[4829]: E0217 15:58:01.436409 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436420 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: E0217 15:58:01.436431 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436437 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436547 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0fd92e-e4d2-4523-97bd-58e10e78bc41" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436561 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd19c165-e47a-4b7f-aaf1-cd266eeb9cc1" containerName="pruner" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.436926 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.439110 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.439670 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.458187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.525408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.525545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626750 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.626921 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.649160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:01 crc kubenswrapper[4829]: I0217 15:58:01.751982 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.004800 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerStarted","Data":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.007323 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerStarted","Data":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.009093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerStarted","Data":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.030005 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pzvbr" podStartSLOduration=3.4391612289999998 podStartE2EDuration="40.029991438s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="2026-02-17 15:57:24.572603275 +0000 UTC m=+156.989621253" lastFinishedPulling="2026-02-17 15:58:01.163433484 +0000 UTC m=+193.580451462" observedRunningTime="2026-02-17 15:58:02.029592947 +0000 UTC m=+194.446610925" watchObservedRunningTime="2026-02-17 15:58:02.029991438 +0000 UTC m=+194.447009416" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.031962 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-plxhn" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:02 crc 
kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:02 crc kubenswrapper[4829]: > Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.046030 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z4qsx" podStartSLOduration=3.264362316 podStartE2EDuration="43.046013243s" podCreationTimestamp="2026-02-17 15:57:19 +0000 UTC" firstStartedPulling="2026-02-17 15:57:21.296765962 +0000 UTC m=+153.713783940" lastFinishedPulling="2026-02-17 15:58:01.078416889 +0000 UTC m=+193.495434867" observedRunningTime="2026-02-17 15:58:02.043533895 +0000 UTC m=+194.460551873" watchObservedRunningTime="2026-02-17 15:58:02.046013243 +0000 UTC m=+194.463031221" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.062969 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fpmz" podStartSLOduration=3.9366251979999998 podStartE2EDuration="39.062953392s" podCreationTimestamp="2026-02-17 15:57:23 +0000 UTC" firstStartedPulling="2026-02-17 15:57:25.589479724 +0000 UTC m=+158.006497702" lastFinishedPulling="2026-02-17 15:58:00.715807918 +0000 UTC m=+193.132825896" observedRunningTime="2026-02-17 15:58:02.060182527 +0000 UTC m=+194.477200505" watchObservedRunningTime="2026-02-17 15:58:02.062953392 +0000 UTC m=+194.479971370" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.196391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 15:58:02 crc kubenswrapper[4829]: W0217 15:58:02.206908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8581faef_5460_4e6b_8102_ba36b8a2c6b6.slice/crio-614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444 WatchSource:0}: Error finding container 614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444: Status 404 returned error can't find the container with id 
614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444 Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.308900 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.308945 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.750627 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.751440 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:02 crc kubenswrapper[4829]: I0217 15:58:02.797792 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.016826 4829 generic.go:334] "Generic (PLEG): container finished" podID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerID="a95346330ded3294f170b3b328f3cf8dcf6cfbd212834348dcf817d1dbf1a33c" exitCode=0 Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.017420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerDied","Data":"a95346330ded3294f170b3b328f3cf8dcf6cfbd212834348dcf817d1dbf1a33c"} Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.017450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerStarted","Data":"614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444"} Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.063978 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.310586 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.310850 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.375980 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lg78k" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:03 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:03 crc kubenswrapper[4829]: > Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.760505 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:03 crc kubenswrapper[4829]: I0217 15:58:03.760540 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.301679 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.345180 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pzvbr" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:04 crc kubenswrapper[4829]: > Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.370864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") pod \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.370919 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") pod \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\" (UID: \"8581faef-5460-4e6b-8102-ba36b8a2c6b6\") " Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.371160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8581faef-5460-4e6b-8102-ba36b8a2c6b6" (UID: "8581faef-5460-4e6b-8102-ba36b8a2c6b6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.375771 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8581faef-5460-4e6b-8102-ba36b8a2c6b6" (UID: "8581faef-5460-4e6b-8102-ba36b8a2c6b6"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.472609 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.472642 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8581faef-5460-4e6b-8102-ba36b8a2c6b6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4829]: I0217 15:58:04.792646 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fpmz" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" probeResult="failure" output=< Feb 17 15:58:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 15:58:04 crc kubenswrapper[4829]: > Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.028339 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.029280 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8581faef-5460-4e6b-8102-ba36b8a2c6b6","Type":"ContainerDied","Data":"614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444"} Feb 17 15:58:05 crc kubenswrapper[4829]: I0217 15:58:05.029325 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614cae1ef7fec9a11e370e504fec1324255f96ba0ea25af753b8a465394b6444" Feb 17 15:58:06 crc kubenswrapper[4829]: I0217 15:58:06.401938 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:06 crc kubenswrapper[4829]: I0217 15:58:06.402173 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5whh" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" containerID="cri-o://22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" gracePeriod=2 Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.047934 4829 generic.go:334] "Generic (PLEG): container finished" podID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerID="22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" exitCode=0 Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.048338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f"} Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.188830 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.322826 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.322959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.323066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") pod \"43b8d950-926a-4dc1-82a3-be0e61618dff\" (UID: \"43b8d950-926a-4dc1-82a3-be0e61618dff\") " Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.324643 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities" (OuterVolumeSpecName: "utilities") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.328008 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk" (OuterVolumeSpecName: "kube-api-access-jsznk") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "kube-api-access-jsznk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.345270 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b8d950-926a-4dc1-82a3-be0e61618dff" (UID: "43b8d950-926a-4dc1-82a3-be0e61618dff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424438 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424470 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsznk\" (UniqueName: \"kubernetes.io/projected/43b8d950-926a-4dc1-82a3-be0e61618dff-kube-api-access-jsznk\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:08 crc kubenswrapper[4829]: I0217 15:58:08.424480 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b8d950-926a-4dc1-82a3-be0e61618dff-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.037996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038373 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038422 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-utilities" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038436 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-utilities" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038452 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038465 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: E0217 15:58:09.038490 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-content" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038502 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="extract-content" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038699 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" containerName="registry-server" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.038739 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8581faef-5460-4e6b-8102-ba36b8a2c6b6" containerName="pruner" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.039297 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.042017 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.042088 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.047636 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067136 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5whh" event={"ID":"43b8d950-926a-4dc1-82a3-be0e61618dff","Type":"ContainerDied","Data":"e9f43846d96bca0182b399c0dc0b711cb4690086566cd841399971665515f259"} Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067204 4829 scope.go:117] "RemoveContainer" containerID="22c0dc64dab6287df84510379a4eb9b083da7cf3227f1b647eb72ef45eb1e07f" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.067263 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5whh" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.086656 4829 scope.go:117] "RemoveContainer" containerID="b6d8fd12049dc4754bea764b8684c4bb1573932e49243d426503b8b0ddf79692" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.107008 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.111006 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5whh"] Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.123760 4829 scope.go:117] "RemoveContainer" containerID="8fa7bb0482a10d017f1f057139c3a8927fdd26933310b5ad6bf197951349cf1e" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143247 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143346 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.143507 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: 
I0217 15:58:09.247286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247441 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247767 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.247887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.300433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"installer-9-crc\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.363666 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:09 crc kubenswrapper[4829]: I0217 15:58:09.764777 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.074918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerStarted","Data":"fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f"} Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.287525 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b8d950-926a-4dc1-82a3-be0e61618dff" path="/var/lib/kubelet/pods/43b8d950-926a-4dc1-82a3-be0e61618dff/volumes" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.398594 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.398921 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.459657 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.679980 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:10 crc kubenswrapper[4829]: I0217 15:58:10.730441 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plxhn" Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.079665 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerStarted","Data":"02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.082460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.084856 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.098006 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.097988063 podStartE2EDuration="2.097988063s" podCreationTimestamp="2026-02-17 15:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:11.096009518 +0000 UTC m=+203.513027506" watchObservedRunningTime="2026-02-17 15:58:11.097988063 +0000 UTC m=+203.515006041" Feb 17 15:58:11 crc kubenswrapper[4829]: I0217 15:58:11.136148 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.091817 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" 
containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.091930 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.098459 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.099480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.382064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:12 crc kubenswrapper[4829]: I0217 15:58:12.444233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.106255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerStarted","Data":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.111972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" 
event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerStarted","Data":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.137432 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cd6xf" podStartSLOduration=2.986294729 podStartE2EDuration="53.137410239s" podCreationTimestamp="2026-02-17 15:57:20 +0000 UTC" firstStartedPulling="2026-02-17 15:57:22.39991112 +0000 UTC m=+154.816929098" lastFinishedPulling="2026-02-17 15:58:12.55102663 +0000 UTC m=+204.968044608" observedRunningTime="2026-02-17 15:58:13.131842306 +0000 UTC m=+205.548860314" watchObservedRunningTime="2026-02-17 15:58:13.137410239 +0000 UTC m=+205.554428237" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.153170 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pc95c" podStartSLOduration=2.967500191 podStartE2EDuration="53.153147012s" podCreationTimestamp="2026-02-17 15:57:20 +0000 UTC" firstStartedPulling="2026-02-17 15:57:22.37667649 +0000 UTC m=+154.793694468" lastFinishedPulling="2026-02-17 15:58:12.562323311 +0000 UTC m=+204.979341289" observedRunningTime="2026-02-17 15:58:13.148068963 +0000 UTC m=+205.565086971" watchObservedRunningTime="2026-02-17 15:58:13.153147012 +0000 UTC m=+205.570165020" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.350127 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.401295 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 15:58:13 crc kubenswrapper[4829]: I0217 15:58:13.808949 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:13 crc 
kubenswrapper[4829]: I0217 15:58:13.865793 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:17 crc kubenswrapper[4829]: I0217 15:58:17.603461 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:17 crc kubenswrapper[4829]: I0217 15:58:17.604109 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fpmz" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" containerID="cri-o://629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" gracePeriod=2 Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.012962 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057446 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057675 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.057781 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") pod \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\" (UID: \"0dfe32e4-aee9-408a-9b01-4ab9f4da515f\") " Feb 17 15:58:18 crc 
kubenswrapper[4829]: I0217 15:58:18.058931 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities" (OuterVolumeSpecName: "utilities") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.068816 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8" (OuterVolumeSpecName: "kube-api-access-5zjb8") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "kube-api-access-5zjb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.149957 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" exitCode=0 Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fpmz" event={"ID":"0dfe32e4-aee9-408a-9b01-4ab9f4da515f","Type":"ContainerDied","Data":"f3fa14125f9325734a8ca74ec00fb7c771325f80130e76977a80fdd8d57f7c75"} Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.150169 4829 scope.go:117] "RemoveContainer" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 
15:58:18.150079 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fpmz" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.159137 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.159159 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjb8\" (UniqueName: \"kubernetes.io/projected/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-kube-api-access-5zjb8\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.169233 4829 scope.go:117] "RemoveContainer" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.205296 4829 scope.go:117] "RemoveContainer" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.224035 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dfe32e4-aee9-408a-9b01-4ab9f4da515f" (UID: "0dfe32e4-aee9-408a-9b01-4ab9f4da515f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.225494 4829 scope.go:117] "RemoveContainer" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.226868 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": container with ID starting with 629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1 not found: ID does not exist" containerID="629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.227197 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1"} err="failed to get container status \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": rpc error: code = NotFound desc = could not find container \"629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1\": container with ID starting with 629b563e532aeb5e86767020d7a4143276a64801e164c0260c53614a5cd8eaf1 not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.227320 4829 scope.go:117] "RemoveContainer" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.227931 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": container with ID starting with 98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5 not found: ID does not exist" containerID="98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228009 
4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5"} err="failed to get container status \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": rpc error: code = NotFound desc = could not find container \"98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5\": container with ID starting with 98fb7bb054317e578c5338ebae01bef17777c07cda3c564624c92db1ec4d88a5 not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228044 4829 scope.go:117] "RemoveContainer" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: E0217 15:58:18.228444 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": container with ID starting with f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd not found: ID does not exist" containerID="f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.228633 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd"} err="failed to get container status \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": rpc error: code = NotFound desc = could not find container \"f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd\": container with ID starting with f53ebcf20125657a7556659533a7b01611682c4c616aa8e0d7f002bfbbb95dcd not found: ID does not exist" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.260737 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0dfe32e4-aee9-408a-9b01-4ab9f4da515f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.474373 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:18 crc kubenswrapper[4829]: I0217 15:58:18.477967 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fpmz"] Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.293900 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" path="/var/lib/kubelet/pods/0dfe32e4-aee9-408a-9b01-4ab9f4da515f/volumes" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.796866 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.797327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:20 crc kubenswrapper[4829]: I0217 15:58:20.862820 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.007957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.008393 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.068842 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.244033 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.247805 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:21 crc kubenswrapper[4829]: I0217 15:58:21.808145 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.424981 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425075 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425140 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.425964 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.426069 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" gracePeriod=600 Feb 17 15:58:22 crc kubenswrapper[4829]: I0217 15:58:22.805634 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185430 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" exitCode=0 Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f"} Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.185690 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cd6xf" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" containerID="cri-o://77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" gracePeriod=2 Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.521256 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631070 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.631185 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") pod \"8d559324-3a7f-41a3-9229-b2b96294faad\" (UID: \"8d559324-3a7f-41a3-9229-b2b96294faad\") " Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.632084 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities" (OuterVolumeSpecName: "utilities") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.647345 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6" (OuterVolumeSpecName: "kube-api-access-429d6") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "kube-api-access-429d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.683232 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d559324-3a7f-41a3-9229-b2b96294faad" (UID: "8d559324-3a7f-41a3-9229-b2b96294faad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732314 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732374 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-429d6\" (UniqueName: \"kubernetes.io/projected/8d559324-3a7f-41a3-9229-b2b96294faad-kube-api-access-429d6\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:23 crc kubenswrapper[4829]: I0217 15:58:23.732390 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d559324-3a7f-41a3-9229-b2b96294faad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.194906 4829 generic.go:334] "Generic (PLEG): container finished" podID="8d559324-3a7f-41a3-9229-b2b96294faad" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" exitCode=0 Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195026 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195076 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-cd6xf" event={"ID":"8d559324-3a7f-41a3-9229-b2b96294faad","Type":"ContainerDied","Data":"5acc356c5d2ec47c5d87b88d2204b71dfd80af3eab05b77d8870f888eb4da2ab"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195040 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cd6xf" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.195107 4829 scope.go:117] "RemoveContainer" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.198687 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.199109 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pc95c" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" containerID="cri-o://311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" gracePeriod=2 Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.231791 4829 scope.go:117] "RemoveContainer" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.258850 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.266199 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cd6xf"] Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.284991 4829 scope.go:117] "RemoveContainer" 
containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.288261 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" path="/var/lib/kubelet/pods/8d559324-3a7f-41a3-9229-b2b96294faad/volumes" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.304516 4829 scope.go:117] "RemoveContainer" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.305136 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": container with ID starting with 77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e not found: ID does not exist" containerID="77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305186 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e"} err="failed to get container status \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": rpc error: code = NotFound desc = could not find container \"77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e\": container with ID starting with 77eee9ebce0ef9387fd70d6f8e0394fa8891dfa064db96ba321fd9c05314607e not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305221 4829 scope.go:117] "RemoveContainer" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.305829 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": container with ID starting with 63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210 not found: ID does not exist" containerID="63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305873 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210"} err="failed to get container status \"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": rpc error: code = NotFound desc = could not find container \"63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210\": container with ID starting with 63e4759597d3d91fbcd57b310977b832e3b323251ec997f534f9617c8b258210 not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.305892 4829 scope.go:117] "RemoveContainer" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: E0217 15:58:24.306399 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": container with ID starting with d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef not found: ID does not exist" containerID="d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.306649 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef"} err="failed to get container status \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": rpc error: code = NotFound desc = could not find container \"d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef\": container with ID 
starting with d53b627193da9fed79f6ee3baaa43224d43e684dc585baaa96d41259780613ef not found: ID does not exist" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.571824 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646492 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646633 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.646660 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") pod \"958bc260-664c-466f-afd3-9a7ac9c119bf\" (UID: \"958bc260-664c-466f-afd3-9a7ac9c119bf\") " Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.647781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities" (OuterVolumeSpecName: "utilities") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.652090 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg" (OuterVolumeSpecName: "kube-api-access-5bzhg") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "kube-api-access-5bzhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.720186 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "958bc260-664c-466f-afd3-9a7ac9c119bf" (UID: "958bc260-664c-466f-afd3-9a7ac9c119bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748358 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748413 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bzhg\" (UniqueName: \"kubernetes.io/projected/958bc260-664c-466f-afd3-9a7ac9c119bf-kube-api-access-5bzhg\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:24 crc kubenswrapper[4829]: I0217 15:58:24.748428 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/958bc260-664c-466f-afd3-9a7ac9c119bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225203 4829 generic.go:334] "Generic (PLEG): container finished" podID="958bc260-664c-466f-afd3-9a7ac9c119bf" 
containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" exitCode=0 Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225346 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225387 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc95c" event={"ID":"958bc260-664c-466f-afd3-9a7ac9c119bf","Type":"ContainerDied","Data":"e732c949ffe37772c10e0db507c9efe9df2cd2fcc8a5827d3621cb8e0059e5bf"} Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225403 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pc95c" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.225417 4829 scope.go:117] "RemoveContainer" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.257435 4829 scope.go:117] "RemoveContainer" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.288465 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.297345 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pc95c"] Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.297881 4829 scope.go:117] "RemoveContainer" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.317468 4829 scope.go:117] "RemoveContainer" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 
15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.318317 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": container with ID starting with 311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344 not found: ID does not exist" containerID="311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.318407 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344"} err="failed to get container status \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": rpc error: code = NotFound desc = could not find container \"311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344\": container with ID starting with 311df6309c148717273c5164c438b2f3bcf3f47e9566a99406f77c9c52e86344 not found: ID does not exist" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.318464 4829 scope.go:117] "RemoveContainer" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.319292 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": container with ID starting with 20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58 not found: ID does not exist" containerID="20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.319351 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58"} err="failed to get container status 
\"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": rpc error: code = NotFound desc = could not find container \"20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58\": container with ID starting with 20da3f826e4078760e7c90e52552c3db25a3ba1ba7c22d5fe86fae11213a6e58 not found: ID does not exist" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.319391 4829 scope.go:117] "RemoveContainer" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: E0217 15:58:25.320201 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": container with ID starting with b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045 not found: ID does not exist" containerID="b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045" Feb 17 15:58:25 crc kubenswrapper[4829]: I0217 15:58:25.320447 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045"} err="failed to get container status \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": rpc error: code = NotFound desc = could not find container \"b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045\": container with ID starting with b03474c905e8224a7c50e6ddcb5597fbb3fd02941e2e5d85a30fe9db2a3bc045 not found: ID does not exist" Feb 17 15:58:26 crc kubenswrapper[4829]: I0217 15:58:26.292859 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" path="/var/lib/kubelet/pods/958bc260-664c-466f-afd3-9a7ac9c119bf/volumes" Feb 17 15:58:32 crc kubenswrapper[4829]: I0217 15:58:32.376316 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8"] Feb 
17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.872483 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.874649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.874947 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875074 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875192 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875312 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875439 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875563 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.875722 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.875846 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-content" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.875969 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876107 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876231 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876360 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876485 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="extract-utilities" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876640 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.876774 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.876895 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877014 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="extract-content" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877305 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d559324-3a7f-41a3-9229-b2b96294faad" containerName="registry-server" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.877444 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfe32e4-aee9-408a-9b01-4ab9f4da515f" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.877567 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="958bc260-664c-466f-afd3-9a7ac9c119bf" containerName="registry-server" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878204 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878400 4829 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878322 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878648 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879002 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879046 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879065 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879079 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878694 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879102 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879115 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879135 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879147 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879169 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879182 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879204 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc 
kubenswrapper[4829]: I0217 15:58:47.879216 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: E0217 15:58:47.879238 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879250 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879407 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879426 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879445 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879470 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879485 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.879511 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878708 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878721 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.878735 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" gracePeriod=15 Feb 17 15:58:47 crc kubenswrapper[4829]: I0217 15:58:47.883317 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075358 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075393 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075475 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075508 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075533 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075556 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.075583 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.084388 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.084825 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085211 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085741 4829 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.085976 4829 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.086011 4829 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.086318 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176642 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176679 4829 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176801 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176814 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176971 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.176996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177005 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177045 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.177193 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.286946 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.381128 4829 generic.go:334] "Generic (PLEG): container finished" podID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerID="02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.381217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerDied","Data":"02a02cdd75f89212de8fb224308fa08c1d499a66c420d437283807d6e108f351"} Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.382236 4829 status_manager.go:851] "Failed to get status for pod" 
podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.384240 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.385432 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386463 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386492 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386505 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" exitCode=0 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386514 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" exitCode=2 Feb 17 15:58:48 crc kubenswrapper[4829]: I0217 15:58:48.386598 4829 scope.go:117] "RemoveContainer" containerID="ef97ba6ae7292223f1bacc8d05ac28ff4e407b379b89e5f662b7db4466ad4208" Feb 17 15:58:48 crc kubenswrapper[4829]: E0217 15:58:48.688555 4829 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.396362 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:49 crc kubenswrapper[4829]: E0217 15:58:49.489749 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.647366 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.648368 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796551 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") pod \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\" (UID: \"9faa2a78-6c08-44c4-a11d-b978b08cac9d\") " Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796598 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.796794 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock" (OuterVolumeSpecName: "var-lock") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.797251 4829 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.797291 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9faa2a78-6c08-44c4-a11d-b978b08cac9d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.804609 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9faa2a78-6c08-44c4-a11d-b978b08cac9d" (UID: "9faa2a78-6c08-44c4-a11d-b978b08cac9d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:49 crc kubenswrapper[4829]: I0217 15:58:49.898364 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9faa2a78-6c08-44c4-a11d-b978b08cac9d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.243922 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.245239 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.245881 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.246423 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405291 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405382 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.405480 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406122 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.406127 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.409344 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410482 4829 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" exitCode=0 Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410646 4829 scope.go:117] "RemoveContainer" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.410682 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.412738 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413216 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413312 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9faa2a78-6c08-44c4-a11d-b978b08cac9d","Type":"ContainerDied","Data":"fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f"} Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413347 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd92fffedffb0cf7185d5b526755fd0f403b238163a69324423526d002f032f" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.413541 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.421345 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.421888 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.440120 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.440733 4829 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.441444 4829 scope.go:117] "RemoveContainer" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.474946 4829 scope.go:117] "RemoveContainer" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc 
kubenswrapper[4829]: I0217 15:58:50.497955 4829 scope.go:117] "RemoveContainer" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.506970 4829 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.507019 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.507036 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.518892 4829 scope.go:117] "RemoveContainer" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.540522 4829 scope.go:117] "RemoveContainer" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.564201 4829 scope.go:117] "RemoveContainer" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.564994 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": container with ID starting with 978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973 not found: ID does not exist" containerID="978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.565118 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973"} err="failed to get container status \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": rpc error: code = NotFound desc = could not find container \"978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973\": container with ID starting with 978d2283e193b8649d3c3386c7e0bb48b09aa90b76d76e82e3518114cd521973 not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.565203 4829 scope.go:117] "RemoveContainer" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.567318 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": container with ID starting with 6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab not found: ID does not exist" containerID="6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567350 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab"} err="failed to get container status \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": rpc error: code = NotFound desc = could not find container \"6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab\": container with ID starting with 6281d5f148c9b5e2fdb0642b52aed2e7b123b0283c2ae6685ffa923434a1c8ab not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567374 4829 scope.go:117] "RemoveContainer" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 
15:58:50.567793 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": container with ID starting with 433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b not found: ID does not exist" containerID="433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567812 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b"} err="failed to get container status \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": rpc error: code = NotFound desc = could not find container \"433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b\": container with ID starting with 433a6bcfcf7caaf0537624cc79aee40b46593c1ede1220512cde9e64b51bdd3b not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.567826 4829 scope.go:117] "RemoveContainer" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.568087 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": container with ID starting with b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e not found: ID does not exist" containerID="b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568162 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e"} err="failed to get container status \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": rpc 
error: code = NotFound desc = could not find container \"b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e\": container with ID starting with b31f024d5434b228414c20fe4326cba01a62a1c96ef3661dd407a81ea2122d8e not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568228 4829 scope.go:117] "RemoveContainer" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.568771 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": container with ID starting with 93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d not found: ID does not exist" containerID="93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568854 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d"} err="failed to get container status \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": rpc error: code = NotFound desc = could not find container \"93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d\": container with ID starting with 93bda794061070660b5be7243b06ec77e598c9027d49d12ca24625660815341d not found: ID does not exist" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.568921 4829 scope.go:117] "RemoveContainer" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: E0217 15:58:50.569394 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": container with ID starting with 
8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503 not found: ID does not exist" containerID="8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503" Feb 17 15:58:50 crc kubenswrapper[4829]: I0217 15:58:50.569442 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503"} err="failed to get container status \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": rpc error: code = NotFound desc = could not find container \"8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503\": container with ID starting with 8b265e901400172960c51f0931bdf7ba341c214b5c728a997e92ec4614f7d503 not found: ID does not exist" Feb 17 15:58:51 crc kubenswrapper[4829]: E0217 15:58:51.091412 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 17 15:58:52 crc kubenswrapper[4829]: I0217 15:58:52.290170 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 15:58:52 crc kubenswrapper[4829]: E0217 15:58:52.946539 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:52 crc kubenswrapper[4829]: I0217 15:58:52.947049 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:52 crc kubenswrapper[4829]: W0217 15:58:52.975709 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e WatchSource:0}: Error finding container 1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e: Status 404 returned error can't find the container with id 1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e Feb 17 15:58:52 crc kubenswrapper[4829]: E0217 15:58:52.980313 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.431114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487"} Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.431458 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1fa4743bed30383e0858ddf7373f4d49bdaa656c080413cefad89de4d41b080e"} Feb 17 15:58:53 crc kubenswrapper[4829]: I0217 15:58:53.432209 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:53 crc kubenswrapper[4829]: E0217 15:58:53.432294 4829 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:54 crc kubenswrapper[4829]: E0217 15:58:54.212816 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:54 crc kubenswrapper[4829]: E0217 15:58:54.293351 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="6.4s" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.404750 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" containerID="cri-o://84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" gracePeriod=15 Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.861986 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.863101 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:57 crc kubenswrapper[4829]: I0217 15:58:57.863475 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013763 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013871 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013900 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") pod 
\"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.013985 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014013 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014051 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014076 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc 
kubenswrapper[4829]: I0217 15:58:58.014102 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014127 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014271 
4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") pod \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\" (UID: \"f1ea7808-ad5e-47ee-a19b-4ece436be60d\") " Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.014537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.015356 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.015513 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.016289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.016618 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.021858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022166 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022678 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx" (OuterVolumeSpecName: "kube-api-access-vz7qx") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "kube-api-access-vz7qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.022943 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023092 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023493 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.023932 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.024844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f1ea7808-ad5e-47ee-a19b-4ece436be60d" (UID: "f1ea7808-ad5e-47ee-a19b-4ece436be60d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.115963 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116048 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116074 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116095 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7qx\" (UniqueName: \"kubernetes.io/projected/f1ea7808-ad5e-47ee-a19b-4ece436be60d-kube-api-access-vz7qx\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116113 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116132 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116150 4829 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116169 4829 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116188 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116205 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116223 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116240 4829 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116257 4829 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.116275 4829 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f1ea7808-ad5e-47ee-a19b-4ece436be60d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.283156 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.283776 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477013 4829 generic.go:334] "Generic (PLEG): container finished" podID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" exitCode=0 Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477065 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerDied","Data":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" event={"ID":"f1ea7808-ad5e-47ee-a19b-4ece436be60d","Type":"ContainerDied","Data":"7baa23e27dea651b430693897781e89b000dbe0f94cbc9c61bef0909c8c3ed1a"} Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.477909 4829 scope.go:117] "RemoveContainer" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.478959 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.479427 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.485154 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.485798 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.501937 4829 scope.go:117] "RemoveContainer" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: E0217 15:58:58.502298 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": container with ID starting with 84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b not found: ID does not exist" containerID="84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b" Feb 17 15:58:58 crc kubenswrapper[4829]: I0217 15:58:58.502337 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b"} err="failed to get container status \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": rpc error: code = NotFound desc = could not find container \"84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b\": container with ID starting with 84dbeaf8ee724ba7b97d87e1f5b07a71423b8bb3e52a7bf228357287a4c2cd0b not found: ID does not exist" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497297 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497357 4829 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5" exitCode=1 Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497390 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5"} Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.497961 4829 scope.go:117] "RemoveContainer" containerID="2f6fa9632d569f5f3f2647eed20c346c39ef986058a4c192a025b9a537fe6ec5" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.498424 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.499058 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: I0217 15:59:00.499566 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:00 crc kubenswrapper[4829]: E0217 15:59:00.694500 4829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="7s" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.512044 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.512163 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"945ab05d78771985d7fa10f19ef17c18cbbf9d2a96fc24cfe6096156651e53da"} Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.514069 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.514667 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:01 crc kubenswrapper[4829]: I0217 15:59:01.515412 4829 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.278472 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.279986 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.280647 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.281235 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.305434 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.305493 4829 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:03 crc kubenswrapper[4829]: E0217 15:59:03.306141 4829 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.306781 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:03 crc kubenswrapper[4829]: W0217 15:59:03.340553 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5 WatchSource:0}: Error finding container be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5: Status 404 returned error can't find the container with id be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5 Feb 17 15:59:03 crc kubenswrapper[4829]: I0217 15:59:03.527279 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be76c430aa1b1bdf924307c6bf9fe2305613375ba74976ea2da7329d51e0f9c5"} Feb 17 15:59:04 crc kubenswrapper[4829]: E0217 15:59:04.213978 4829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513ec0c47f8f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,LastTimestamp:2026-02-17 15:58:52.979411188 +0000 UTC m=+245.396429176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.446793 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.454152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.454796 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.455434 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: 
connection refused" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.455957 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.536539 4829 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bcb56bc01ac126b70d3ba476643d5384f1d58a222170d303030efc4d80185842" exitCode=0 Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.536653 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bcb56bc01ac126b70d3ba476643d5384f1d58a222170d303030efc4d80185842"} Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537189 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537568 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.537632 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.538219 4829 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection 
refused" Feb 17 15:59:04 crc kubenswrapper[4829]: E0217 15:59:04.538219 4829 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.538814 4829 status_manager.go:851] "Failed to get status for pod" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" pod="openshift-authentication/oauth-openshift-558db77b4-8kmp8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-8kmp8\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:04 crc kubenswrapper[4829]: I0217 15:59:04.539275 4829 status_manager.go:851] "Failed to get status for pod" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.558192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f4c1a29b704d808d12087cc63e69a99ff7f44c7ecf17856837e6ce82b593deb"} Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.559210 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc8d6c678e3b71a2f08913ea321b5b856403c5d2299a6a02f3f5f4d2a9de8700"} Feb 17 15:59:05 crc kubenswrapper[4829]: I0217 15:59:05.559325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"28beff336ae2932e57e19638e46f2c1305e41ac5c7252c25229b4295568ab0e2"} Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.568989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"551796ff3d20fcedb09eb46ccc618e99f54e2af2d65e52d31493da2e84235bd1"} Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569297 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569304 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"de8cc5433242d2e33aec78e46c3a7546c0edc36b50fa91c0775c9e4f8b6fde9e"} Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.569319 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:06 crc kubenswrapper[4829]: I0217 15:59:06.570688 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.307385 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.307447 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:08 crc kubenswrapper[4829]: I0217 15:59:08.314800 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:11 crc kubenswrapper[4829]: I0217 15:59:11.580696 
4829 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:11 crc kubenswrapper[4829]: I0217 15:59:11.712000 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8dfacaac-c9f1-44a9-8bc9-62b7cf034443" Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.602658 4829 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.603129 4829 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2af2d606-28d2-485f-a755-6a525fdbfcf2" Feb 17 15:59:12 crc kubenswrapper[4829]: I0217 15:59:12.605831 4829 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8dfacaac-c9f1-44a9-8bc9-62b7cf034443" Feb 17 15:59:19 crc kubenswrapper[4829]: I0217 15:59:19.902721 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.013332 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.583858 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.609822 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.724886 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:59:21 crc kubenswrapper[4829]: I0217 15:59:21.856015 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.006481 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.129158 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.593411 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.792912 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:59:22 crc kubenswrapper[4829]: I0217 15:59:22.945139 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.088713 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.150917 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.289482 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.337384 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:59:23 crc 
kubenswrapper[4829]: I0217 15:59:23.366419 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.443016 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:59:23 crc kubenswrapper[4829]: I0217 15:59:23.733066 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.013649 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.194121 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.243225 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.404356 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.411434 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.424193 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.432537 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.447945 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.642519 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.700552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.705252 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.725568 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.741197 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.843161 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.889613 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:59:24 crc kubenswrapper[4829]: I0217 15:59:24.953625 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.023409 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.100516 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 
15:59:25.176375 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.213644 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.318977 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.430989 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.532140 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.532930 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.572525 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.786763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.906292 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:59:25 crc kubenswrapper[4829]: I0217 15:59:25.999485 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.018358 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 
15:59:26.169806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.236978 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.247665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.332178 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.376232 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.566630 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.601451 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.646204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.764174 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.790462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:59:26 crc kubenswrapper[4829]: I0217 15:59:26.894856 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:59:26 crc 
kubenswrapper[4829]: I0217 15:59:26.902791 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.046829 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.167372 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.172420 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.190902 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.266987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.304195 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.327975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.390270 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.455984 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.566811 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.611432 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.656025 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.692595 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.764812 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.792186 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.875450 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.911889 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.922673 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:59:27 crc kubenswrapper[4829]: I0217 15:59:27.963373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.059719 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.125391 4829 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.156950 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.194425 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.203856 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.241179 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.252622 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.273884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.343225 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.365163 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.492848 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.493730 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.504727 4829 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.553762 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.695899 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.780341 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.809384 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:59:28 crc kubenswrapper[4829]: I0217 15:59:28.842665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.039497 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.067521 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.187828 4829 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.245822 4829 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.246520 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.259798 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.311250 4829 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.375912 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.396477 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.423934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.448676 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.524304 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.659091 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.694475 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.731298 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.766064 4829 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:59:29 crc kubenswrapper[4829]: I0217 15:59:29.904649 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.191570 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.255696 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.343715 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.365236 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.426069 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.456340 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.500947 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.510687 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.516836 4829 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.585024 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.794806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.840863 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.855068 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.858775 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.892373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:59:30 crc kubenswrapper[4829]: I0217 15:59:30.919387 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.022981 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.028211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.148369 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.214701 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.223520 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.284242 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.301987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.355447 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.500204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.573099 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.619987 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.651533 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.657078 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.750064 4829 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.850975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.869107 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.899034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:59:31 crc kubenswrapper[4829]: I0217 15:59:31.929073 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.010838 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.020166 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.070847 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.189190 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.413541 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.461608 4829 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:32 
crc kubenswrapper[4829]: I0217 15:59:32.613348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.715622 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.827925 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.931090 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.942200 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.945820 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.955267 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:59:32 crc kubenswrapper[4829]: I0217 15:59:32.977602 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.078933 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.123617 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.160330 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.161319 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.223852 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.456670 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.459858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.485974 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.514543 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.515619 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.741487 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.871184 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:59:33 crc kubenswrapper[4829]: I0217 15:59:33.971589 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:59:33 crc 
kubenswrapper[4829]: I0217 15:59:33.973835 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.149164 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.197754 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.244545 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.259905 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.355134 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.403386 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.406449 4829 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.413540 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8kmp8","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.413672 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.419009 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.427906 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.428009 4829 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.432834 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.432817625 podStartE2EDuration="23.432817625s" podCreationTimestamp="2026-02-17 15:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:59:34.431704203 +0000 UTC m=+286.848722211" watchObservedRunningTime="2026-02-17 15:59:34.432817625 +0000 UTC m=+286.849835603" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.572216 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.580981 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.643707 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.658858 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.749380 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:59:34 crc kubenswrapper[4829]: I0217 15:59:34.845972 4829 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.102654 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.185672 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.392442 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.574389 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.612233 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.688140 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.748424 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.750187 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.791106 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:59:35 crc kubenswrapper[4829]: I0217 15:59:35.852212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:59:36 crc 
kubenswrapper[4829]: I0217 15:59:36.109545 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.272117 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.286980 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" path="/var/lib/kubelet/pods/f1ea7808-ad5e-47ee-a19b-4ece436be60d/volumes" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.351217 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.434047 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.500924 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.540784 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.634078 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.666665 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.711194 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 
15:59:36.742949 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.850056 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.941154 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:59:36 crc kubenswrapper[4829]: I0217 15:59:36.951066 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.017932 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.044672 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"] Feb 17 15:59:37 crc kubenswrapper[4829]: E0217 15:59:37.045094 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045167 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: E0217 15:59:37.045229 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045287 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045439 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9faa2a78-6c08-44c4-a11d-b978b08cac9d" containerName="installer" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045508 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ea7808-ad5e-47ee-a19b-4ece436be60d" containerName="oauth-openshift" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.045914 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.049654 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.050260 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.050406 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051197 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051652 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.051989 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.052392 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.053116 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.053503 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.053721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.054512 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.054866 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.073744 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.074194 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.076717 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.087364 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"] Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.147382 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.233877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234391 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234598 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.234957 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.235073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235282 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.235345 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl55h\" (UniqueName: 
\"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.336748 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " 
pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337410 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.337660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338772 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338833 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338880 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338880 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-dir\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338929 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc 
kubenswrapper[4829]: I0217 15:59:37.339001 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.339068 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.339102 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl55h\" (UniqueName: \"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.338479 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.340175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-service-ca\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.340669 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-audit-policies\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.344387 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.344464 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-router-certs\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-login\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 
15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346365 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-session\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.346923 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.347048 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-template-error\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.347718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.349389 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.371557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl55h\" (UniqueName: \"kubernetes.io/projected/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4-kube-api-access-rl55h\") pod \"oauth-openshift-798f497965-xwsng\" (UID: \"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\") " pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.383002 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.648084 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:59:37 crc kubenswrapper[4829]: I0217 15:59:37.664482 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.629391 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" 
Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630100 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" 
Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630133 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:40 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b" Netns:"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:40 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:40 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: E0217 15:59:40.630211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b\\\" Netns:\\\"/var/run/netns/513458d9-1899-4d91-b443-ebf4577de64a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=dde31e6c4c646267ceb0011e5bbeef4dbb60d358a1413d98284eabb33c5ee37b;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 15:59:40 crc kubenswrapper[4829]: I0217 15:59:40.779165 4829 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:40 crc kubenswrapper[4829]: I0217 15:59:40.779922 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767207 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc kubenswrapper[4829]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767880 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc kubenswrapper[4829]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.767916 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:43 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6" Netns:"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:43 crc 
kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:43 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:43 crc kubenswrapper[4829]: E0217 15:59:43.768013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6\\\" Netns:\\\"/var/run/netns/c8a545f8-0555-47a0-b28a-897a0c04a013\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=eb37a38fa0109c6a79e34da3335a26896689d5dcad815313d6d963c159b516c6;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: 
[openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 15:59:45 crc kubenswrapper[4829]: I0217 15:59:45.570448 4829 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:59:45 crc kubenswrapper[4829]: I0217 15:59:45.571305 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" gracePeriod=5 Feb 17 15:59:48 crc kubenswrapper[4829]: I0217 15:59:48.062088 4829 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.840006 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" exitCode=0 Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.840125 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} Feb 17 15:59:49 crc kubenswrapper[4829]: I0217 15:59:49.841426 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.850101 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.850381 4829 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" exitCode=137 Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.852502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerStarted","Data":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.852946 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:59:50 crc kubenswrapper[4829]: I0217 15:59:50.854656 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.146279 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.146419 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.249972 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250066 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250214 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250229 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250254 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250319 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250352 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250880 4829 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250909 4829 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250926 4829 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.250943 4829 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.258796 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.352356 4829 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.863860 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.864101 4829 scope.go:117] "RemoveContainer" containerID="b00141202ae2e3518ef2bf316c4b6b16623855bedcc67dcd81058a7b314c0487" Feb 17 15:59:51 crc kubenswrapper[4829]: I0217 15:59:51.864292 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:59:52 crc kubenswrapper[4829]: I0217 15:59:52.290758 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 15:59:54 crc kubenswrapper[4829]: I0217 15:59:54.936470 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:59:55 crc kubenswrapper[4829]: I0217 15:59:55.513381 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.104009 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.145131 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.278898 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.279542 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.306319 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:59:56 crc kubenswrapper[4829]: I0217 15:59:56.519481 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:59:58 crc kubenswrapper[4829]: I0217 15:59:58.999371 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563123 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563209 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563242 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 15:59:59 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0" Netns:"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Path:"" ERRORED: 
error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod "oauth-openshift-798f497965-xwsng" not found Feb 17 15:59:59 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 15:59:59 crc kubenswrapper[4829]: > pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 15:59:59 crc kubenswrapper[4829]: E0217 15:59:59.563331 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-798f497965-xwsng_openshift-authentication(20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-798f497965-xwsng_openshift-authentication_20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4_0(5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0): error adding pod openshift-authentication_oauth-openshift-798f497965-xwsng to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0\\\" 
Netns:\\\"/var/run/netns/efb82b0e-f80e-49fb-8dd7-890aa12dc492\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-798f497965-xwsng;K8S_POD_INFRA_CONTAINER_ID=5fc62795ad9e516578a93d4db906604662b2d5a9c396d9b7ac1152b663b5dbc0;K8S_POD_UID=20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-798f497965-xwsng] networking: Multus: [openshift-authentication/oauth-openshift-798f497965-xwsng/20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-798f497965-xwsng in out of cluster comm: pod \\\"oauth-openshift-798f497965-xwsng\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podUID="20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.206348 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:00 crc kubenswrapper[4829]: E0217 16:00:00.206629 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.206644 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc 
kubenswrapper[4829]: I0217 16:00:00.206790 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.207269 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.209759 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.210354 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.225195 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.393827 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.394302 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.394346 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495413 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.495544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.497816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.502609 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.517000 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"collect-profiles-29522400-sbp9p\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.530412 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:00 crc kubenswrapper[4829]: I0217 16:00:00.559764 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 16:00:01 crc kubenswrapper[4829]: I0217 16:00:01.308045 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 16:00:01 crc kubenswrapper[4829]: I0217 16:00:01.470348 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.102408 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.102731 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" containerID="cri-o://335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" gracePeriod=30 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.111056 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.199837 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.200079 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" 
containerID="cri-o://659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" gracePeriod=30 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.320547 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.496777 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525271 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w9jk\" (UniqueName: \"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525367 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.525396 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") pod \"16271aa7-2602-467c-b9aa-31c491952eb8\" (UID: \"16271aa7-2602-467c-b9aa-31c491952eb8\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526200 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526231 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.526644 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config" (OuterVolumeSpecName: "config") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.531077 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.532055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk" (OuterVolumeSpecName: "kube-api-access-5w9jk") pod "16271aa7-2602-467c-b9aa-31c491952eb8" (UID: "16271aa7-2602-467c-b9aa-31c491952eb8"). InnerVolumeSpecName "kube-api-access-5w9jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.553237 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626392 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626426 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626436 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16271aa7-2602-467c-b9aa-31c491952eb8-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626444 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16271aa7-2602-467c-b9aa-31c491952eb8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.626453 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w9jk\" (UniqueName: 
\"kubernetes.io/projected/16271aa7-2602-467c-b9aa-31c491952eb8-kube-api-access-5w9jk\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.727938 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.728078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.728757 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.729112 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") pod \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\" (UID: \"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e\") " Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.730103 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.730344 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config" (OuterVolumeSpecName: "config") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.733627 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8" (OuterVolumeSpecName: "kube-api-access-svwh8") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "kube-api-access-svwh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.733691 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" (UID: "8f19ab1b-c5ef-4cde-9145-cec00ae7a64e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831715 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831761 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831779 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svwh8\" (UniqueName: \"kubernetes.io/projected/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-kube-api-access-svwh8\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.831797 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.901163 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937316 4829 generic.go:334] "Generic (PLEG): container finished" podID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" exitCode=0 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937397 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerDied","Data":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937499 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj" event={"ID":"8f19ab1b-c5ef-4cde-9145-cec00ae7a64e","Type":"ContainerDied","Data":"6a23ac3a0952fee762d7b612b6d50abf950d5b8d2ac6689a55a814e3e26c2a02"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.937538 4829 scope.go:117] "RemoveContainer" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939745 4829 generic.go:334] "Generic (PLEG): container finished" podID="16271aa7-2602-467c-b9aa-31c491952eb8" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" exitCode=0 Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939789 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerDied","Data":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" event={"ID":"16271aa7-2602-467c-b9aa-31c491952eb8","Type":"ContainerDied","Data":"8de47067337388c88e7fd0377c70063d3507f99b16f2a38f0c76133107e5774a"} Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.939856 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xn8fx" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.963816 4829 scope.go:117] "RemoveContainer" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: E0217 16:00:02.964344 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": container with ID starting with 659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40 not found: ID does not exist" containerID="659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.964411 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40"} err="failed to get container status \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": rpc error: code = NotFound desc = could not find container \"659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40\": container with ID starting with 659abb7192cc4953e266c8d7e736d94241323a469e0367e595e1892bf6940b40 not found: ID does not exist" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.964448 4829 scope.go:117] "RemoveContainer" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.994544 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.998635 4829 scope.go:117] "RemoveContainer" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: E0217 16:00:02.999549 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": container with ID starting with 335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026 not found: ID does not exist" containerID="335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026" Feb 17 16:00:02 crc kubenswrapper[4829]: I0217 16:00:02.999771 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026"} err="failed to get container status \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": rpc error: code = NotFound desc = could not find container \"335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026\": container with ID starting with 335590e9d1b15fc78a06f32d646dee325fe23b49de7335704f5ad6181b02c026 not found: ID does not exist" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.006049 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xn8fx"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.012919 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.019102 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9v7jj"] Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.460424 4829 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 16:00:03 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error 
adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.460980 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 16:00:03 crc kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.461020 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 17 16:00:03 crc 
kubenswrapper[4829]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e" Netns:"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod "collect-profiles-29522400-sbp9p" not found Feb 17 16:00:03 crc kubenswrapper[4829]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:00:03 crc kubenswrapper[4829]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.461147 4829 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager(5695ec4a-a69a-4e62-9ddd-c9cea43413a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager(5695ec4a-a69a-4e62-9ddd-c9cea43413a9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29522400-sbp9p_openshift-operator-lifecycle-manager_5695ec4a-a69a-4e62-9ddd-c9cea43413a9_0(30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29522400-sbp9p to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e\\\" Netns:\\\"/var/run/netns/de6ffe3d-8757-4a4e-b4c5-c1dbc936f9f6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29522400-sbp9p;K8S_POD_INFRA_CONTAINER_ID=30544bcc7bb65a1adb29ce165af86285b4bc289d5240d6ae23273cca32ce5f1e;K8S_POD_UID=5695ec4a-a69a-4e62-9ddd-c9cea43413a9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p/5695ec4a-a69a-4e62-9ddd-c9cea43413a9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29522400-sbp9p in out of cluster comm: pod \\\"collect-profiles-29522400-sbp9p\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.565631 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.566252 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566295 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: E0217 16:00:03.566340 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566357 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566714 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" containerName="route-controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.566763 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" 
containerName="controller-manager" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.569321 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.571622 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.572796 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.573936 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574520 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574568 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574839 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.574995 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.578776 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586252 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:03 crc 
kubenswrapper[4829]: I0217 16:00:03.586457 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586652 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.586834 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.587294 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.587794 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.588426 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.597660 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642838 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642895 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.642985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: 
\"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643000 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643023 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.643122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc 
kubenswrapper[4829]: I0217 16:00:03.744608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744769 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" 
(UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.744913 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.747197 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.747262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.748100 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.749838 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.750073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.757430 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.762794 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.774857 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"controller-manager-8949fdbb5-hmjs5\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.779122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"route-controller-manager-6dfd847c67-kgxzq\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.903986 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.918760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.950254 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:03 crc kubenswrapper[4829]: I0217 16:00:03.950769 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.287970 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16271aa7-2602-467c-b9aa-31c491952eb8" path="/var/lib/kubelet/pods/16271aa7-2602-467c-b9aa-31c491952eb8/volumes" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.289429 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f19ab1b-c5ef-4cde-9145-cec00ae7a64e" path="/var/lib/kubelet/pods/8f19ab1b-c5ef-4cde-9145-cec00ae7a64e/volumes" Feb 17 16:00:04 crc kubenswrapper[4829]: I0217 16:00:04.539001 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.105522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.118190 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.280213 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:05 crc kubenswrapper[4829]: W0217 16:00:05.285125 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d0d4bd_3c46_47c4_bc3d_25f039cf2f80.slice/crio-598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836 WatchSource:0}: Error finding container 598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836: Status 404 returned error can't find the container with id 598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836 Feb 17 16:00:05 crc kubenswrapper[4829]: W0217 16:00:05.287332 4829 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a661d1_dfe2_47e8_bf1a_9b4563e546cf.slice/crio-1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749 WatchSource:0}: Error finding container 1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749: Status 404 returned error can't find the container with id 1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749 Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.289370 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.965968 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerStarted","Data":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.966045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerStarted","Data":"1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.966073 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967423 4829 generic.go:334] "Generic (PLEG): container finished" podID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerID="389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d" exitCode=0 Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerDied","Data":"389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.967475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerStarted","Data":"c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968550 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerStarted","Data":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerStarted","Data":"598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836"} Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.968857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.975265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:05 crc kubenswrapper[4829]: I0217 16:00:05.992157 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" podStartSLOduration=3.9921433950000003 podStartE2EDuration="3.992143395s" podCreationTimestamp="2026-02-17 16:00:02 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:05.989186913 +0000 UTC m=+318.406204891" watchObservedRunningTime="2026-02-17 16:00:05.992143395 +0000 UTC m=+318.409161373" Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.034001 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.042486 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" podStartSLOduration=4.042457312 podStartE2EDuration="4.042457312s" podCreationTimestamp="2026-02-17 16:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:06.036096825 +0000 UTC m=+318.453114803" watchObservedRunningTime="2026-02-17 16:00:06.042457312 +0000 UTC m=+318.459475330" Feb 17 16:00:06 crc kubenswrapper[4829]: I0217 16:00:06.366651 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.123370 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.331523 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389542 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.389729 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") pod \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\" (UID: \"5695ec4a-a69a-4e62-9ddd-c9cea43413a9\") " Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.390820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.397388 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz" (OuterVolumeSpecName: "kube-api-access-sjwfz") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). 
InnerVolumeSpecName "kube-api-access-sjwfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.398364 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5695ec4a-a69a-4e62-9ddd-c9cea43413a9" (UID: "5695ec4a-a69a-4e62-9ddd-c9cea43413a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491111 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491170 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwfz\" (UniqueName: \"kubernetes.io/projected/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-kube-api-access-sjwfz\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.491192 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5695ec4a-a69a-4e62-9ddd-c9cea43413a9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.984996 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" event={"ID":"5695ec4a-a69a-4e62-9ddd-c9cea43413a9","Type":"ContainerDied","Data":"c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695"} Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.985074 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5987648db2544274abf75d9fb0934925a7dc6284572d1368799ed498c14e695" Feb 17 16:00:07 crc kubenswrapper[4829]: I0217 16:00:07.985301 4829 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p" Feb 17 16:00:08 crc kubenswrapper[4829]: I0217 16:00:08.074059 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 16:00:08 crc kubenswrapper[4829]: I0217 16:00:08.245723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 16:00:09 crc kubenswrapper[4829]: I0217 16:00:09.375676 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.039702 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.039983 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager" containerID="cri-o://0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" gracePeriod=30 Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.062506 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.062824 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager" containerID="cri-o://ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" gracePeriod=30 Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.278354 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.278905 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.409125 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.502716 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.508483 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636454 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636657 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " Feb 
17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636715 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636774 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") pod \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636902 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636931 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") pod \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\" (UID: \"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.636966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") pod 
\"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\" (UID: \"87a661d1-dfe2-47e8-bf1a-9b4563e546cf\") " Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637596 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637853 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.637877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config" (OuterVolumeSpecName: "config") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.638077 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.638266 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config" (OuterVolumeSpecName: "config") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng" (OuterVolumeSpecName: "kube-api-access-bw2ng") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "kube-api-access-bw2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641850 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2" (OuterVolumeSpecName: "kube-api-access-qt5w2") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "kube-api-access-qt5w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.641973 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" (UID: "d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.642841 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87a661d1-dfe2-47e8-bf1a-9b4563e546cf" (UID: "87a661d1-dfe2-47e8-bf1a-9b4563e546cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.738811 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739102 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739154 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739179 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739206 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt5w2\" (UniqueName: \"kubernetes.io/projected/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-kube-api-access-qt5w2\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739232 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739281 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739301 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw2ng\" (UniqueName: \"kubernetes.io/projected/87a661d1-dfe2-47e8-bf1a-9b4563e546cf-kube-api-access-bw2ng\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.739323 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:10 crc kubenswrapper[4829]: I0217 16:00:10.751494 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798f497965-xwsng"] Feb 17 16:00:10 crc kubenswrapper[4829]: W0217 16:00:10.755731 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ddca7e_d4a1_4a03_95d2_6c3b1c2ba6c4.slice/crio-9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286 WatchSource:0}: Error finding container 9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286: Status 404 returned error can't find the container with id 9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286 Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.006745 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" event={"ID":"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4","Type":"ContainerStarted","Data":"9a964a30de72256fc8052733dc24b01f330d9700746f967834f0dc75ef587286"} Feb 17 16:00:11 crc 
kubenswrapper[4829]: I0217 16:00:11.009199 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" exitCode=0 Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerDied","Data":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"} Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009323 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009347 4829 scope.go:117] "RemoveContainer" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.009328 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8949fdbb5-hmjs5" event={"ID":"d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80","Type":"ContainerDied","Data":"598de25df9f776ff5bdaa35b6a00ec2ebc2dbc1a7de2a06a1702ad213d7fc836"} Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.012996 4829 generic.go:334] "Generic (PLEG): container finished" podID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" exitCode=0 Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013071 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerDied","Data":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"} Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013471 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" event={"ID":"87a661d1-dfe2-47e8-bf1a-9b4563e546cf","Type":"ContainerDied","Data":"1bf64d976ccb510d5880e207387bab2469884ee0aec0d2aef9d26429f138b749"} Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.013110 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.044704 4829 scope.go:117] "RemoveContainer" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.045398 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": container with ID starting with 0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818 not found: ID does not exist" containerID="0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.045453 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818"} err="failed to get container status \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": rpc error: code = NotFound desc = could not find container \"0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818\": container with ID starting with 0058f345011074a99b51dd156799f0f20ce1519662ae5153e25e1ad2683e7818 not found: ID does not exist" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.045490 4829 scope.go:117] "RemoveContainer" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.065995 4829 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.070356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8949fdbb5-hmjs5"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.079216 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.081652 4829 scope.go:117] "RemoveContainer" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.082216 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": container with ID starting with ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509 not found: ID does not exist" containerID="ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.082271 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509"} err="failed to get container status \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": rpc error: code = NotFound desc = could not find container \"ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509\": container with ID starting with ec086b713f405292ba913fbf4b39d07641bd6989ff4db336515d957389b53509 not found: ID does not exist" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.085361 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfd847c67-kgxzq"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581322 
4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581727 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581748 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager" Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581772 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581784 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager" Feb 17 16:00:11 crc kubenswrapper[4829]: E0217 16:00:11.581805 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.581819 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582034 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" containerName="controller-manager" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582068 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" containerName="collect-profiles" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.582092 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" containerName="route-controller-manager" Feb 17 16:00:11 crc 
kubenswrapper[4829]: I0217 16:00:11.582780 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.588740 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.588985 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.589068 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590555 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590856 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.590915 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.600450 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.600474 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.602206 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.612650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615269 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615453 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.615827 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.616381 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.616765 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.620774 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.625292 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752603 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: 
\"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752655 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752767 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752900 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752941 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.752992 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753154 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.753281 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.854893 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: 
\"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855103 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855232 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855264 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855323 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.855407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.856435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.857139 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.857891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-client-ca\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.858727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff2dc4ce-73aa-4af1-92bc-480766efec5f-config\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.865178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff2dc4ce-73aa-4af1-92bc-480766efec5f-serving-cert\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.867666 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.874765 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.883252 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk5fk\" (UniqueName: \"kubernetes.io/projected/ff2dc4ce-73aa-4af1-92bc-480766efec5f-kube-api-access-vk5fk\") pod \"route-controller-manager-7889c76dc5-qpfqb\" (UID: \"ff2dc4ce-73aa-4af1-92bc-480766efec5f\") " pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.886179 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"controller-manager-55cd48b6b9-h5glq\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.905507 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:11 crc kubenswrapper[4829]: I0217 16:00:11.934293 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.060021 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" event={"ID":"20ddca7e-d4a1-4a03-95d2-6c3b1c2ba6c4","Type":"ContainerStarted","Data":"1f8f34da87ac3541d3268f757fb3317046bad80af6ec5c1cf136c6d5d053a8f6"} Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.061272 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.069933 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.130245 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-798f497965-xwsng" podStartSLOduration=100.130204617 podStartE2EDuration="1m40.130204617s" podCreationTimestamp="2026-02-17 15:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:12.106080863 +0000 UTC m=+324.523098851" watchObservedRunningTime="2026-02-17 16:00:12.130204617 +0000 UTC m=+324.547222595" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.186564 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"] Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.289825 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a661d1-dfe2-47e8-bf1a-9b4563e546cf" path="/var/lib/kubelet/pods/87a661d1-dfe2-47e8-bf1a-9b4563e546cf/volumes" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.291080 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80" path="/var/lib/kubelet/pods/d8d0d4bd-3c46-47c4-bc3d-25f039cf2f80/volumes" Feb 17 16:00:12 crc kubenswrapper[4829]: I0217 16:00:12.468979 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"] Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.067778 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerStarted","Data":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.068119 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerStarted","Data":"9296fc8a05e64c8caca2c8a1392a0740bf17e8421ebdcec6c4d6a1bf074bfb8e"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069401 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069882 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" event={"ID":"ff2dc4ce-73aa-4af1-92bc-480766efec5f","Type":"ContainerStarted","Data":"38d3ac6eefa5fb175f4e1a9e6d36087b28207773546c2cb8c6b7e2ee19de20c8"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.069921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" event={"ID":"ff2dc4ce-73aa-4af1-92bc-480766efec5f","Type":"ContainerStarted","Data":"00d5067c34eb9a6b8d3c5bd1bf0a4b1a860ef0999178895bd60d6e2c48490c9f"} Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.070308 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.073003 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.077326 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb"
Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.091702 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" podStartSLOduration=3.091684154 podStartE2EDuration="3.091684154s" podCreationTimestamp="2026-02-17 16:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:13.089146033 +0000 UTC m=+325.506164031" watchObservedRunningTime="2026-02-17 16:00:13.091684154 +0000 UTC m=+325.508702132"
Feb 17 16:00:13 crc kubenswrapper[4829]: I0217 16:00:13.103514 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7889c76dc5-qpfqb" podStartSLOduration=3.103494394 podStartE2EDuration="3.103494394s" podCreationTimestamp="2026-02-17 16:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:13.10192145 +0000 UTC m=+325.518939428" watchObservedRunningTime="2026-02-17 16:00:13.103494394 +0000 UTC m=+325.520512382"
Feb 17 16:00:18 crc kubenswrapper[4829]: I0217 16:00:18.008781 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.118723 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"]
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.119517 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager" containerID="cri-o://68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" gracePeriod=30
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.598861 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.713913 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") "
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") "
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") "
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714189 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") "
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") pod \"0bb5db83-ef1f-4e88-9d1c-d01334049378\" (UID: \"0bb5db83-ef1f-4e88-9d1c-d01334049378\") "
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714857 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca" (OuterVolumeSpecName: "client-ca") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714875 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config" (OuterVolumeSpecName: "config") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.714967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715281 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715329 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.715351 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bb5db83-ef1f-4e88-9d1c-d01334049378-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.719203 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926" (OuterVolumeSpecName: "kube-api-access-9s926") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "kube-api-access-9s926". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.723677 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0bb5db83-ef1f-4e88-9d1c-d01334049378" (UID: "0bb5db83-ef1f-4e88-9d1c-d01334049378"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.816965 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bb5db83-ef1f-4e88-9d1c-d01334049378-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:24 crc kubenswrapper[4829]: I0217 16:00:24.817014 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s926\" (UniqueName: \"kubernetes.io/projected/0bb5db83-ef1f-4e88-9d1c-d01334049378-kube-api-access-9s926\") on node \"crc\" DevicePath \"\""
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159714 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c" exitCode=0
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerDied","Data":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"}
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159790 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq" event={"ID":"0bb5db83-ef1f-4e88-9d1c-d01334049378","Type":"ContainerDied","Data":"9296fc8a05e64c8caca2c8a1392a0740bf17e8421ebdcec6c4d6a1bf074bfb8e"}
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159813 4829 scope.go:117] "RemoveContainer" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.159939 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.186105 4829 scope.go:117] "RemoveContainer" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"
Feb 17 16:00:25 crc kubenswrapper[4829]: E0217 16:00:25.186660 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": container with ID starting with 68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c not found: ID does not exist" containerID="68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.186710 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c"} err="failed to get container status \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": rpc error: code = NotFound desc = could not find container \"68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c\": container with ID starting with 68d0c08669a66c5a7cc6a0b203fba63d5b2bcd0999b5a166f399f5fd6acaf98c not found: ID does not exist"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.187524 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"]
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.191483 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-h5glq"]
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581056 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"]
Feb 17 16:00:25 crc kubenswrapper[4829]: E0217 16:00:25.581413 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581443 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.581637 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" containerName="controller-manager"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.582177 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.584278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.584941 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.585879 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.586459 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.587582 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.588661 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.597606 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.598484 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"]
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.632841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.633379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735065 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735728 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.735897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.736213 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.737026 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.737088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.739554 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.763162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"controller-manager-5747cbd54d-48vhk\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") " pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:25 crc kubenswrapper[4829]: I0217 16:00:25.912085 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:26 crc kubenswrapper[4829]: I0217 16:00:26.289440 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb5db83-ef1f-4e88-9d1c-d01334049378" path="/var/lib/kubelet/pods/0bb5db83-ef1f-4e88-9d1c-d01334049378/volumes"
Feb 17 16:00:26 crc kubenswrapper[4829]: I0217 16:00:26.301911 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"]
Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerStarted","Data":"415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d"}
Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172297 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerStarted","Data":"beacd0d0ef6626d35fb52988e3bbd5f44ad53ca81aceba78081f2a53436b10ca"}
Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.172681 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.177621 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:00:27 crc kubenswrapper[4829]: I0217 16:00:27.189700 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" podStartSLOduration=3.189682865 podStartE2EDuration="3.189682865s" podCreationTimestamp="2026-02-17 16:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:27.186700822 +0000 UTC m=+339.603718820" watchObservedRunningTime="2026-02-17 16:00:27.189682865 +0000 UTC m=+339.606700843"
Feb 17 16:00:52 crc kubenswrapper[4829]: I0217 16:00:52.424678 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:00:52 crc kubenswrapper[4829]: I0217 16:00:52.426037 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.074649 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"]
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.076797 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager" containerID="cri-o://415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d" gracePeriod=30
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.397368 4829 generic.go:334] "Generic (PLEG): container finished" podID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerID="415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d" exitCode=0
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.397466 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerDied","Data":"415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d"}
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.493962 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603753 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") "
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") "
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") "
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603950 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") "
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.603972 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") pod \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\" (UID: \"0d4e94d2-8fbf-47b1-acd8-b79b18470a25\") "
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.604658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.604837 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config" (OuterVolumeSpecName: "config") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.605167 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.608702 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.608878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn" (OuterVolumeSpecName: "kube-api-access-d95xn") pod "0d4e94d2-8fbf-47b1-acd8-b79b18470a25" (UID: "0d4e94d2-8fbf-47b1-acd8-b79b18470a25"). InnerVolumeSpecName "kube-api-access-d95xn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705351 4829 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705402 4829 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705410 4829 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705420 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:01:02 crc kubenswrapper[4829]: I0217 16:01:02.705429 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d95xn\" (UniqueName: \"kubernetes.io/projected/0d4e94d2-8fbf-47b1-acd8-b79b18470a25-kube-api-access-d95xn\") on node \"crc\" DevicePath \"\""
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.183772 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"]
Feb 17 16:01:03 crc kubenswrapper[4829]: E0217 16:01:03.184059 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.184076 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.185606 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" containerName="controller-manager"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.186088 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.213626 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"]
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314252 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314642 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314675 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.314709 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.340114 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk" event={"ID":"0d4e94d2-8fbf-47b1-acd8-b79b18470a25","Type":"ContainerDied","Data":"beacd0d0ef6626d35fb52988e3bbd5f44ad53ca81aceba78081f2a53436b10ca"}
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405099 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5747cbd54d-48vhk"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.405345 4829 scope.go:117] "RemoveContainer" containerID="415fd3fb2ef9f71ba6eeea6c925e6c61ca7a8406d78a0cd2696465b4a7319e1d"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415397 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415442 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415468 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt"
Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415502 4829
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415528 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.415549 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.417096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-trusted-ca\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.417331 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-certificates\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.418086 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5eaf5db2-3348-4197-b96d-bf04627f6aae-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.419361 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5eaf5db2-3348-4197-b96d-bf04627f6aae-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.419803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-registry-tls\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.432820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-bound-sa-token\") pod \"image-registry-66df7c8f76-gvpwt\" (UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.435475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nvz\" (UniqueName: \"kubernetes.io/projected/5eaf5db2-3348-4197-b96d-bf04627f6aae-kube-api-access-j9nvz\") pod \"image-registry-66df7c8f76-gvpwt\" 
(UID: \"5eaf5db2-3348-4197-b96d-bf04627f6aae\") " pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.476069 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.479672 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5747cbd54d-48vhk"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.501263 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.609338 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.610172 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.615689 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.615965 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616472 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.616823 4829 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.617067 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.623647 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.627220 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722169 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722370 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722401 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.722451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823574 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823680 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823753 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.823788 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.824963 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-proxy-ca-bundles\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.825347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-client-ca\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.826926 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f31a99f-549f-4e80-b051-ce65bbe55c09-config\") pod 
\"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.830868 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f31a99f-549f-4e80-b051-ce65bbe55c09-serving-cert\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.849516 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stscz\" (UniqueName: \"kubernetes.io/projected/0f31a99f-549f-4e80-b051-ce65bbe55c09-kube-api-access-stscz\") pod \"controller-manager-55cd48b6b9-kw4f6\" (UID: \"0f31a99f-549f-4e80-b051-ce65bbe55c09\") " pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.934424 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:03 crc kubenswrapper[4829]: I0217 16:01:03.944566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gvpwt"] Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.157740 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6"] Feb 17 16:01:04 crc kubenswrapper[4829]: W0217 16:01:04.163198 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f31a99f_549f_4e80_b051_ce65bbe55c09.slice/crio-f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb WatchSource:0}: Error finding container f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb: Status 404 returned error can't find the container with id f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.285708 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4e94d2-8fbf-47b1-acd8-b79b18470a25" path="/var/lib/kubelet/pods/0d4e94d2-8fbf-47b1-acd8-b79b18470a25/volumes" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412084 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" event={"ID":"5eaf5db2-3348-4197-b96d-bf04627f6aae","Type":"ContainerStarted","Data":"717b178b185b59b96ad734a9d09feb405a12579b5e7b499ed809d2d545b77f09"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412137 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" event={"ID":"5eaf5db2-3348-4197-b96d-bf04627f6aae","Type":"ContainerStarted","Data":"a963822f6ecfd6ed23945d6354924eb6a8af70006e2f0e6e7b4488d03be0d21f"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.412185 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.413487 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" event={"ID":"0f31a99f-549f-4e80-b051-ce65bbe55c09","Type":"ContainerStarted","Data":"010d62862df4f79ef60ebc758961f663abdc107f0cb4ac7d0d619c04a67c0d8e"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.413539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" event={"ID":"0f31a99f-549f-4e80-b051-ce65bbe55c09","Type":"ContainerStarted","Data":"f005ea2f6deab81c78da753998802e503afddfbf54ad6fbba7085c0913cab9eb"} Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.414369 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.422540 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.439691 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" podStartSLOduration=1.439667171 podStartE2EDuration="1.439667171s" podCreationTimestamp="2026-02-17 16:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:04.434527558 +0000 UTC m=+376.851545556" watchObservedRunningTime="2026-02-17 16:01:04.439667171 +0000 UTC m=+376.856685159" Feb 17 16:01:04 crc kubenswrapper[4829]: I0217 16:01:04.460688 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-55cd48b6b9-kw4f6" podStartSLOduration=2.46067037 podStartE2EDuration="2.46067037s" podCreationTimestamp="2026-02-17 16:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:04.459354173 +0000 UTC m=+376.876372151" watchObservedRunningTime="2026-02-17 16:01:04.46067037 +0000 UTC m=+376.877688348" Feb 17 16:01:22 crc kubenswrapper[4829]: I0217 16:01:22.425078 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:01:22 crc kubenswrapper[4829]: I0217 16:01:22.427086 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:01:23 crc kubenswrapper[4829]: I0217 16:01:23.516268 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gvpwt" Feb 17 16:01:23 crc kubenswrapper[4829]: I0217 16:01:23.602107 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.658755 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.659958 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4qsx" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" 
containerName="registry-server" containerID="cri-o://a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.666191 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.666795 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plxhn" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" containerID="cri-o://9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.672061 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.672240 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" containerID="cri-o://c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.701215 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.701548 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lg78k" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" containerID="cri-o://7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.712057 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:26 crc 
kubenswrapper[4829]: I0217 16:01:26.712381 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pzvbr" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" containerID="cri-o://cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" gracePeriod=30 Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.717929 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.718872 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.723498 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896734 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896794 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.896838 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.998997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.999065 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:26 crc kubenswrapper[4829]: I0217 16:01:26.999146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.001494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.017562 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.023241 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb5sr\" (UniqueName: \"kubernetes.io/projected/1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9-kube-api-access-tb5sr\") pod \"marketplace-operator-79b997595-dk6vq\" (UID: \"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9\") " pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.185414 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.197871 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303302 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303366 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.303440 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") pod \"980a7ff9-af1a-413c-8573-00243ed3ece1\" (UID: \"980a7ff9-af1a-413c-8573-00243ed3ece1\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.305121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities" (OuterVolumeSpecName: "utilities") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.307512 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt" (OuterVolumeSpecName: "kube-api-access-k6kjt") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "kube-api-access-k6kjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.350877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "980a7ff9-af1a-413c-8573-00243ed3ece1" (UID: "980a7ff9-af1a-413c-8573-00243ed3ece1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410226 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410268 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6kjt\" (UniqueName: \"kubernetes.io/projected/980a7ff9-af1a-413c-8573-00243ed3ece1-kube-api-access-k6kjt\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.410287 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980a7ff9-af1a-413c-8573-00243ed3ece1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.421180 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.427488 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.443445 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.511450 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") pod \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\" (UID: \"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.512717 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.516107 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.531527 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8" (OuterVolumeSpecName: "kube-api-access-m2ld8") pod "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" (UID: "dd8fe958-b9ba-48ef-ba18-57fd0eec43dd"). InnerVolumeSpecName "kube-api-access-m2ld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603361 4829 generic.go:334] "Generic (PLEG): container finished" podID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603482 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" event={"ID":"dd8fe958-b9ba-48ef-ba18-57fd0eec43dd","Type":"ContainerDied","Data":"e87972fe228716c21ec7cecb1607e14e50dea5013a2a6768e543463984d2ebe1"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603429 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zn4qs" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.603498 4829 scope.go:117] "RemoveContainer" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.606249 4829 generic.go:334] "Generic (PLEG): container finished" podID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerID="9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.606313 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608275 4829 generic.go:334] "Generic (PLEG): container finished" podID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608354 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg78k" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.608372 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg78k" event={"ID":"bedc9476-2a16-46d6-8764-8fd184304b5f","Type":"ContainerDied","Data":"d19f6da1913041c5fd10e98efa71ae0ed6c2d8facfc11c2aa17840a88a15c77f"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610022 4829 generic.go:334] "Generic (PLEG): container finished" podID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610069 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzvbr" event={"ID":"d8370c4f-c05e-425c-a267-c270e36b5dfd","Type":"ContainerDied","Data":"d88ae7ce66cddd428f6c7659ec0052182a3e020bdd280801c5c5478b8fa7cde4"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.610088 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzvbr" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612100 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612214 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") pod \"d8370c4f-c05e-425c-a267-c270e36b5dfd\" (UID: \"d8370c4f-c05e-425c-a267-c270e36b5dfd\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.612353 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") pod \"bedc9476-2a16-46d6-8764-8fd184304b5f\" (UID: \"bedc9476-2a16-46d6-8764-8fd184304b5f\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613137 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613163 4829 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.613314 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ld8\" (UniqueName: \"kubernetes.io/projected/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd-kube-api-access-m2ld8\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.616328 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx" (OuterVolumeSpecName: "kube-api-access-slsbx") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "kube-api-access-slsbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.616735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities" (OuterVolumeSpecName: "utilities") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.619378 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities" (OuterVolumeSpecName: "utilities") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620873 4829 generic.go:334] "Generic (PLEG): container finished" podID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" exitCode=0 Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620948 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4qsx" event={"ID":"980a7ff9-af1a-413c-8573-00243ed3ece1","Type":"ContainerDied","Data":"9f6b76db525ea1716f4c1ce5158f77a01ac87265be5d53578be8975ef1a1c0b8"} Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.620982 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4qsx" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.622836 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.624761 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5" (OuterVolumeSpecName: "kube-api-access-6jrd5") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "kube-api-access-6jrd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.629359 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dk6vq"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.652785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bedc9476-2a16-46d6-8764-8fd184304b5f" (UID: "bedc9476-2a16-46d6-8764-8fd184304b5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.653348 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655077 4829 scope.go:117] "RemoveContainer" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.655450 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": container with ID starting with c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43 not found: ID does not exist" containerID="c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655617 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43"} err="failed to get container status \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": rpc error: code = NotFound desc = could not find container \"c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43\": container with ID starting with c372e0bfd3ec348a61543c6e7f4fb5ca6476514a321224acea2083b45b22fd43 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.655774 4829 scope.go:117] "RemoveContainer" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.656153 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": container with ID starting with 
21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39 not found: ID does not exist" containerID="21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.656194 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39"} err="failed to get container status \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": rpc error: code = NotFound desc = could not find container \"21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39\": container with ID starting with 21184fa6a69a7ee91dfe2981436a50ae882a8ac3d098c7d41e3d651a05ffaa39 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.656221 4829 scope.go:117] "RemoveContainer" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.661505 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zn4qs"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.688398 4829 scope.go:117] "RemoveContainer" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.688961 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.690329 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4qsx"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.705617 4829 scope.go:117] "RemoveContainer" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714603 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714663 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jrd5\" (UniqueName: \"kubernetes.io/projected/bedc9476-2a16-46d6-8764-8fd184304b5f-kube-api-access-6jrd5\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714676 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714687 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedc9476-2a16-46d6-8764-8fd184304b5f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.714705 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slsbx\" (UniqueName: \"kubernetes.io/projected/d8370c4f-c05e-425c-a267-c270e36b5dfd-kube-api-access-slsbx\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.726747 4829 scope.go:117] "RemoveContainer" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727057 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": container with ID starting with 7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650 not found: ID does not exist" containerID="7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727105 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650"} err="failed to get container status \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": rpc error: code = NotFound desc = could not find container \"7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650\": container with ID starting with 7226fd3c701678589c3e9f339b2f3c14fd225ffee8cbe8b86323984fe7076650 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727134 4829 scope.go:117] "RemoveContainer" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727408 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": container with ID starting with 75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b not found: ID does not exist" containerID="75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727439 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b"} err="failed to get container status \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": rpc error: code = NotFound desc = could not find container \"75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b\": container with ID starting with 75519c48e0226864c59a13f5b122e6d66ff7ba90e50d157b0b03473a801af21b not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727457 4829 scope.go:117] "RemoveContainer" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.727789 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": container with ID starting with 29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186 not found: ID does not exist" containerID="29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727827 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186"} err="failed to get container status \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": rpc error: code = NotFound desc = could not find container \"29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186\": container with ID starting with 29e63b240428746b94e697d7b435f62b5d1278b5e2cd4860dcbc46791a2c6186 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.727857 4829 scope.go:117] "RemoveContainer" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.728189 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.744898 4829 scope.go:117] "RemoveContainer" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.787115 4829 scope.go:117] "RemoveContainer" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.803173 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8370c4f-c05e-425c-a267-c270e36b5dfd" (UID: "d8370c4f-c05e-425c-a267-c270e36b5dfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.815534 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8370c4f-c05e-425c-a267-c270e36b5dfd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.831590 4829 scope.go:117] "RemoveContainer" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.831962 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": container with ID starting with cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582 not found: ID does not exist" containerID="cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.831987 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582"} err="failed to get container status \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": rpc error: code = NotFound desc = could not find container \"cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582\": container with ID starting with cae52a433ea82ad09b9692fcd9817834e7b31c2c00e56d26f2779a393ac19582 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832009 4829 scope.go:117] "RemoveContainer" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.832183 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": container with ID starting with a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e not found: ID does not exist" containerID="a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832202 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e"} err="failed to get container status \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": rpc error: code = NotFound desc = could not find container \"a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e\": container with ID starting with a7183ae2d1db6a208dc16e9f2ba9679c350e33ac9f700eac88b1037af9d4ac2e not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832228 4829 scope.go:117] "RemoveContainer" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.832568 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": container with ID starting with 223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072 not found: ID does not exist" containerID="223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832596 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072"} err="failed to get container status \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": rpc error: code = NotFound desc = could not find container \"223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072\": container with ID starting with 223f8d0bac6f9e2ce1e846d711fbfcabcbc616e521a61f0407f436767147a072 not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.832609 4829 scope.go:117] "RemoveContainer" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.847429 4829 scope.go:117] "RemoveContainer" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.863405 4829 scope.go:117] "RemoveContainer" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882396 4829 scope.go:117] "RemoveContainer" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.882769 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": container with ID starting with 
a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb not found: ID does not exist" containerID="a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882799 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb"} err="failed to get container status \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": rpc error: code = NotFound desc = could not find container \"a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb\": container with ID starting with a59258eaba74e2de8fe404d01008f418a539cb2e58b26c60d2aa9e05f97152eb not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.882822 4829 scope.go:117] "RemoveContainer" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.883231 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": container with ID starting with 954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213 not found: ID does not exist" containerID="954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883265 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213"} err="failed to get container status \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": rpc error: code = NotFound desc = could not find container \"954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213\": container with ID starting with 954ccb17ee98f4fdbf23aa2742afc1880809d3ded833804e952b2a0b54a4b213 not found: ID does not 
exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883289 4829 scope.go:117] "RemoveContainer" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: E0217 16:01:27.883679 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": container with ID starting with 0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f not found: ID does not exist" containerID="0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.883706 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f"} err="failed to get container status \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": rpc error: code = NotFound desc = could not find container \"0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f\": container with ID starting with 0292ad8c854e5c4773a1cb9d6a474d492491278aa1fa68499cca03ff46eba97f not found: ID does not exist" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.916948 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.926743 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 
16:01:27.926864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") pod \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\" (UID: \"2a5cfa35-799d-41b4-afa1-e5d056ceed8c\") " Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.928032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities" (OuterVolumeSpecName: "utilities") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.932762 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z" (OuterVolumeSpecName: "kube-api-access-qwm5z") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "kube-api-access-qwm5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.942991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.949752 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pzvbr"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.954344 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:27 crc kubenswrapper[4829]: I0217 16:01:27.957939 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg78k"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.000646 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a5cfa35-799d-41b4-afa1-e5d056ceed8c" (UID: "2a5cfa35-799d-41b4-afa1-e5d056ceed8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028439 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028477 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.028492 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwm5z\" (UniqueName: \"kubernetes.io/projected/2a5cfa35-799d-41b4-afa1-e5d056ceed8c-kube-api-access-qwm5z\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.287768 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" path="/var/lib/kubelet/pods/980a7ff9-af1a-413c-8573-00243ed3ece1/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.288700 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" path="/var/lib/kubelet/pods/bedc9476-2a16-46d6-8764-8fd184304b5f/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.289815 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" path="/var/lib/kubelet/pods/d8370c4f-c05e-425c-a267-c270e36b5dfd/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.291245 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" path="/var/lib/kubelet/pods/dd8fe958-b9ba-48ef-ba18-57fd0eec43dd/volumes" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629353 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-plxhn" event={"ID":"2a5cfa35-799d-41b4-afa1-e5d056ceed8c","Type":"ContainerDied","Data":"528d1a220e35598debfbbc4d51d5f58ab0e77306af0907fe6a4260ebd06e34c4"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629571 4829 scope.go:117] "RemoveContainer" containerID="9c32747c47cb46829c25364b98cf862eead8f7abb9263aa939eb942986d29425" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.629693 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plxhn" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" event={"ID":"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9","Type":"ContainerStarted","Data":"3e83c4edbbeb93deede15ac765b6c7670a4281956550ec4df0e58589b435f965"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" event={"ID":"1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9","Type":"ContainerStarted","Data":"2c79ddcdad8cf2554a1531b0732434356c8c56c3cd2c10b167b2192c19a52ed6"} Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.631437 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.638094 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.651939 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.653162 4829 scope.go:117] "RemoveContainer" 
containerID="6825e589759fde4b15e1827a2242a21f58c78dd3d3ffd21c62f20ccb67341f8d" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.657257 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-plxhn"] Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.679566 4829 scope.go:117] "RemoveContainer" containerID="8f8f7324dd8c4c578893f8ce30720af50c624ed6c6cb2764328d69e6ac9dda7f" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.680468 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dk6vq" podStartSLOduration=2.680451236 podStartE2EDuration="2.680451236s" podCreationTimestamp="2026-02-17 16:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:01:28.677115449 +0000 UTC m=+401.094133447" watchObservedRunningTime="2026-02-17 16:01:28.680451236 +0000 UTC m=+401.097469234" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.874501 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"] Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875169 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875194 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875211 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875220 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-utilities" Feb 17 
16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875228 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875235 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875247 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875254 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875265 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875272 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875279 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875286 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875299 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875307 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="extract-utilities" Feb 17 
16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875317 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875325 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875335 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875342 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875351 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875358 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-content" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875368 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875374 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875386 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875394 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 
crc kubenswrapper[4829]: E0217 16:01:28.875409 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875416 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="extract-utilities" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875526 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8370c4f-c05e-425c-a267-c270e36b5dfd" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875539 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875551 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875562 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="980a7ff9-af1a-413c-8573-00243ed3ece1" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875577 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedc9476-2a16-46d6-8764-8fd184304b5f" containerName="registry-server" Feb 17 16:01:28 crc kubenswrapper[4829]: E0217 16:01:28.875693 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875704 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.875870 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8fe958-b9ba-48ef-ba18-57fd0eec43dd" 
containerName="marketplace-operator" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.877788 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.879671 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:01:28 crc kubenswrapper[4829]: I0217 16:01:28.884581 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"] Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044678 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044868 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.044969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.079679 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-h59n9"] Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.081179 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.087002 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.095411 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59n9"] Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146167 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.146303 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.147169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-utilities\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.147642 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b134949-3436-4e61-9649-5704b6bcb7fd-catalog-content\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.163005 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfd4n\" (UniqueName: \"kubernetes.io/projected/2b134949-3436-4e61-9649-5704b6bcb7fd-kube-api-access-hfd4n\") pod \"redhat-marketplace-v2sjn\" (UID: \"2b134949-3436-4e61-9649-5704b6bcb7fd\") " pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.240032 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.247907 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.248118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.248296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350720 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350880 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod 
\"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.350909 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.352066 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-catalog-content\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.352334 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1207e9e-0755-423d-9a3d-b83ded02c8c2-utilities\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.370675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjbh\" (UniqueName: \"kubernetes.io/projected/b1207e9e-0755-423d-9a3d-b83ded02c8c2-kube-api-access-5cjbh\") pod \"redhat-operators-h59n9\" (UID: \"b1207e9e-0755-423d-9a3d-b83ded02c8c2\") " pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.398417 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.693968 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2sjn"] Feb 17 16:01:29 crc kubenswrapper[4829]: W0217 16:01:29.706303 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b134949_3436_4e61_9649_5704b6bcb7fd.slice/crio-28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437 WatchSource:0}: Error finding container 28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437: Status 404 returned error can't find the container with id 28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437 Feb 17 16:01:29 crc kubenswrapper[4829]: I0217 16:01:29.787753 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h59n9"] Feb 17 16:01:29 crc kubenswrapper[4829]: W0217 16:01:29.796329 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1207e9e_0755_423d_9a3d_b83ded02c8c2.slice/crio-510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547 WatchSource:0}: Error finding container 510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547: Status 404 returned error can't find the container with id 510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547 Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.290629 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5cfa35-799d-41b4-afa1-e5d056ceed8c" path="/var/lib/kubelet/pods/2a5cfa35-799d-41b4-afa1-e5d056ceed8c/volumes" Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.651936 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b134949-3436-4e61-9649-5704b6bcb7fd" containerID="b75d79935bed5c3439e427ae88375c4f1bcc50e276aea79ec67d6126fd2e6c71" 
exitCode=0 Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.651985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerDied","Data":"b75d79935bed5c3439e427ae88375c4f1bcc50e276aea79ec67d6126fd2e6c71"} Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.652023 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerStarted","Data":"28c5160167dcc980fdd211a92cd6781281f6f19b964f0dccc3764d0a78a94437"} Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653663 4829 generic.go:334] "Generic (PLEG): container finished" podID="b1207e9e-0755-423d-9a3d-b83ded02c8c2" containerID="a0c5e4f1c9b6225d700d459d6678a80a5e30a4f6a8a64b96aaca4c353297cd9d" exitCode=0 Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerDied","Data":"a0c5e4f1c9b6225d700d459d6678a80a5e30a4f6a8a64b96aaca4c353297cd9d"} Feb 17 16:01:30 crc kubenswrapper[4829]: I0217 16:01:30.653761 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"510f5df528788b2dc8087cb7557f0736a5cd3516381bc1c5e5b1f0e5288ea547"} Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.279774 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvk9j"] Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.281348 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.284876 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.294162 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvk9j"] Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.476726 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477358 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477416 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.477438 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.478074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.480832 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.498256 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578158 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578234 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578334 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" 
(UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578380 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.578811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-catalog-content\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.579026 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-utilities\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.596785 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8jx\" (UniqueName: \"kubernetes.io/projected/65b3d23b-0d04-496a-9dbb-fb4ed59d313b-kube-api-access-dj8jx\") pod \"community-operators-vvk9j\" (UID: \"65b3d23b-0d04-496a-9dbb-fb4ed59d313b\") " 
pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.659728 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9"} Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.661987 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.662444 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b134949-3436-4e61-9649-5704b6bcb7fd" containerID="aa36779be39aa726f4da4e9126cfdc1b11c13a0995a40ba9c5cfac2963fa23c6" exitCode=0 Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.662562 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerDied","Data":"aa36779be39aa726f4da4e9126cfdc1b11c13a0995a40ba9c5cfac2963fa23c6"} Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.679760 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680008 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 
16:01:31.680156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.680823 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:31 crc kubenswrapper[4829]: I0217 16:01:31.700033 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"certified-operators-rqfvj\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") " pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:31.803109 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.672540 4829 generic.go:334] "Generic (PLEG): container finished" podID="b1207e9e-0755-423d-9a3d-b83ded02c8c2" containerID="6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9" exitCode=0 Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.672775 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerDied","Data":"6e92a65bff47fef7004cae6c45e9a8380b5e22f703ed035ba2b82b102558a2d9"} Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.705092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"] Feb 17 16:01:32 crc kubenswrapper[4829]: W0217 16:01:32.709902 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92bf9e45_4314_4bab_8fda_e0fbf0e5e2b3.slice/crio-bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4 WatchSource:0}: Error finding container bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4: Status 404 returned error can't find the container with id bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4 Feb 17 16:01:32 crc kubenswrapper[4829]: I0217 16:01:32.722045 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvk9j"] Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.679472 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2sjn" event={"ID":"2b134949-3436-4e61-9649-5704b6bcb7fd","Type":"ContainerStarted","Data":"bcf7a7749f6b8b487dc8900e4efc7d463ece516d429a7fc61622c5ad830e92b3"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.682073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-h59n9" event={"ID":"b1207e9e-0755-423d-9a3d-b83ded02c8c2","Type":"ContainerStarted","Data":"9a2fd2f20644c0e7382ce5a04a739ef5064ff225acf34d2feda69f9852e192ac"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683119 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325" exitCode=0 Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683152 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.683174 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685020 4829 generic.go:334] "Generic (PLEG): container finished" podID="65b3d23b-0d04-496a-9dbb-fb4ed59d313b" containerID="670291e11b65c31fc36061561f528177efcf34e72dacd5cce0d0b9604697fee6" exitCode=0 Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685055 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerDied","Data":"670291e11b65c31fc36061561f528177efcf34e72dacd5cce0d0b9604697fee6"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.685075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" 
event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"49955cf127697addfddd5d1a4907c67cebb9bc250fbd09a8f01eda5cf86ea055"} Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.700806 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2sjn" podStartSLOduration=3.284217559 podStartE2EDuration="5.700790844s" podCreationTimestamp="2026-02-17 16:01:28 +0000 UTC" firstStartedPulling="2026-02-17 16:01:30.655396682 +0000 UTC m=+403.072414700" lastFinishedPulling="2026-02-17 16:01:33.071970007 +0000 UTC m=+405.488987985" observedRunningTime="2026-02-17 16:01:33.698796926 +0000 UTC m=+406.115814904" watchObservedRunningTime="2026-02-17 16:01:33.700790844 +0000 UTC m=+406.117808812" Feb 17 16:01:33 crc kubenswrapper[4829]: I0217 16:01:33.715981 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h59n9" podStartSLOduration=2.157278156 podStartE2EDuration="4.715960256s" podCreationTimestamp="2026-02-17 16:01:29 +0000 UTC" firstStartedPulling="2026-02-17 16:01:30.655367791 +0000 UTC m=+403.072385769" lastFinishedPulling="2026-02-17 16:01:33.214049891 +0000 UTC m=+405.631067869" observedRunningTime="2026-02-17 16:01:33.714528384 +0000 UTC m=+406.131546362" watchObservedRunningTime="2026-02-17 16:01:33.715960256 +0000 UTC m=+406.132978234" Feb 17 16:01:34 crc kubenswrapper[4829]: I0217 16:01:34.690595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"} Feb 17 16:01:34 crc kubenswrapper[4829]: I0217 16:01:34.693261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" 
event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520"} Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.698874 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21" exitCode=0 Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.698963 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"} Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.701343 4829 generic.go:334] "Generic (PLEG): container finished" podID="65b3d23b-0d04-496a-9dbb-fb4ed59d313b" containerID="db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520" exitCode=0 Feb 17 16:01:35 crc kubenswrapper[4829]: I0217 16:01:35.701377 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerDied","Data":"db2a1c2fddbdbf82573e82a701c9784deaff940c97ab83d162959b950a33d520"} Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.710463 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvk9j" event={"ID":"65b3d23b-0d04-496a-9dbb-fb4ed59d313b","Type":"ContainerStarted","Data":"a9926dc89992ffbb3cc636334f0bc2a8a639030228c812b7325445578eceba50"} Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.712906 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerStarted","Data":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"} Feb 17 16:01:36 crc kubenswrapper[4829]: 
I0217 16:01:36.728072 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvk9j" podStartSLOduration=3.209178967 podStartE2EDuration="5.728049949s" podCreationTimestamp="2026-02-17 16:01:31 +0000 UTC" firstStartedPulling="2026-02-17 16:01:33.68759795 +0000 UTC m=+406.104615918" lastFinishedPulling="2026-02-17 16:01:36.206468902 +0000 UTC m=+408.623486900" observedRunningTime="2026-02-17 16:01:36.726608067 +0000 UTC m=+409.143626045" watchObservedRunningTime="2026-02-17 16:01:36.728049949 +0000 UTC m=+409.145067927" Feb 17 16:01:36 crc kubenswrapper[4829]: I0217 16:01:36.751469 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqfvj" podStartSLOduration=3.288748902 podStartE2EDuration="5.75145382s" podCreationTimestamp="2026-02-17 16:01:31 +0000 UTC" firstStartedPulling="2026-02-17 16:01:33.68451326 +0000 UTC m=+406.101531238" lastFinishedPulling="2026-02-17 16:01:36.147218188 +0000 UTC m=+408.564236156" observedRunningTime="2026-02-17 16:01:36.748125483 +0000 UTC m=+409.165143461" watchObservedRunningTime="2026-02-17 16:01:36.75145382 +0000 UTC m=+409.168471798" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.240258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.240614 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.292956 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.399625 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:39 crc 
kubenswrapper[4829]: I0217 16:01:39.399873 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.438328 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.770778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2sjn" Feb 17 16:01:39 crc kubenswrapper[4829]: I0217 16:01:39.779144 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h59n9" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.662210 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.663301 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.710487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.776989 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvk9j" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.803451 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.803629 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:41 crc kubenswrapper[4829]: I0217 16:01:41.847351 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:42 crc kubenswrapper[4829]: I0217 16:01:42.789110 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqfvj" Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.659499 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" containerID="cri-o://37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" gracePeriod=30 Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.799628 4829 generic.go:334] "Generic (PLEG): container finished" podID="dc817ced-7abe-422d-af13-779118b5fe0f" containerID="37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" exitCode=0 Feb 17 16:01:48 crc kubenswrapper[4829]: I0217 16:01:48.799639 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerDied","Data":"37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b"} Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.089623 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226475 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226542 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226763 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226831 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226850 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226889 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.226940 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") pod \"dc817ced-7abe-422d-af13-779118b5fe0f\" (UID: \"dc817ced-7abe-422d-af13-779118b5fe0f\") " Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.227686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.227727 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.232788 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.234055 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.236545 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.236545 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.238895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g" (OuterVolumeSpecName: "kube-api-access-nxg2g") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "kube-api-access-nxg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.242897 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dc817ced-7abe-422d-af13-779118b5fe0f" (UID: "dc817ced-7abe-422d-af13-779118b5fe0f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328137 4829 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328185 4829 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328208 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxg2g\" (UniqueName: \"kubernetes.io/projected/dc817ced-7abe-422d-af13-779118b5fe0f-kube-api-access-nxg2g\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328228 4829 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/dc817ced-7abe-422d-af13-779118b5fe0f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328245 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328261 4829 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc817ced-7abe-422d-af13-779118b5fe0f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.328282 4829 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc817ced-7abe-422d-af13-779118b5fe0f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.806868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" event={"ID":"dc817ced-7abe-422d-af13-779118b5fe0f","Type":"ContainerDied","Data":"e1c2032971992b25f6faeb0c4f6543a735b942353043a8e72a8326e32c6d7542"} Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.806910 4829 scope.go:117] "RemoveContainer" containerID="37df374d1d47f237b509d069a1b778c254861701bd77754b7d7433a7bd3d8c7b" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.807010 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zht4j" Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.857645 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:49 crc kubenswrapper[4829]: I0217 16:01:49.863526 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zht4j"] Feb 17 16:01:50 crc kubenswrapper[4829]: I0217 16:01:50.289772 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" path="/var/lib/kubelet/pods/dc817ced-7abe-422d-af13-779118b5fe0f/volumes" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424755 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424828 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.424876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.425443 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.425502 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c" gracePeriod=600 Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.827912 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c" exitCode=0 Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828003 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"} Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828263 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"} Feb 17 16:01:52 crc kubenswrapper[4829]: I0217 16:01:52.828290 4829 scope.go:117] "RemoveContainer" containerID="e2678f2aaf5356aa770327b692162ea33f1817868df15ef2b2b05176ceb4924f" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.743550 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:01:59 crc kubenswrapper[4829]: E0217 16:01:59.744851 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc 
kubenswrapper[4829]: I0217 16:01:59.744884 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.745190 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc817ced-7abe-422d-af13-779118b5fe0f" containerName="registry" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.746112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752307 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752682 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.752953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.754070 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.754432 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.755122 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.875664 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 
16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.976992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.978501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/6cefa21f-9e59-4010-ad20-b8e03cf353bf-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.985883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cefa21f-9e59-4010-ad20-b8e03cf353bf-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:01:59 crc kubenswrapper[4829]: I0217 16:01:59.995188 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzktw\" (UniqueName: \"kubernetes.io/projected/6cefa21f-9e59-4010-ad20-b8e03cf353bf-kube-api-access-fzktw\") pod \"cluster-monitoring-operator-6d5b84845-crsct\" (UID: \"6cefa21f-9e59-4010-ad20-b8e03cf353bf\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.077906 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.515521 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct"] Feb 17 16:02:00 crc kubenswrapper[4829]: I0217 16:02:00.890287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" event={"ID":"6cefa21f-9e59-4010-ad20-b8e03cf353bf","Type":"ContainerStarted","Data":"0d803e081171f0fdf381a62bffe3d2d8eedba8c413c242abf0a94f07bb34bcc6"} Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.901925 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.902994 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.903846 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" event={"ID":"6cefa21f-9e59-4010-ad20-b8e03cf353bf","Type":"ContainerStarted","Data":"d1b1543149dadfea086e9cdabc894c26e75a4b9a196b98f736069a00ce8de741"} Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.906070 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-82jtk" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.906262 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 17 16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.921498 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 
16:02:02 crc kubenswrapper[4829]: I0217 16:02:02.947693 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-crsct" podStartSLOduration=2.144602704 podStartE2EDuration="3.947673739s" podCreationTimestamp="2026-02-17 16:01:59 +0000 UTC" firstStartedPulling="2026-02-17 16:02:00.524305925 +0000 UTC m=+432.941323943" lastFinishedPulling="2026-02-17 16:02:02.327377 +0000 UTC m=+434.744394978" observedRunningTime="2026-02-17 16:02:02.945527347 +0000 UTC m=+435.362545315" watchObservedRunningTime="2026-02-17 16:02:02.947673739 +0000 UTC m=+435.364691717" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.016439 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.117901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: E0217 16:02:03.118046 4829 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:02:03 crc kubenswrapper[4829]: E0217 16:02:03.118116 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates 
podName:728a0007-d901-4c84-aa7d-13a845147d80 nodeName:}" failed. No retries permitted until 2026-02-17 16:02:03.618096438 +0000 UTC m=+436.035114426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-lrr94" (UID: "728a0007-d901-4c84-aa7d-13a845147d80") : secret "prometheus-operator-admission-webhook-tls" not found Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.624959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.632730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/728a0007-d901-4c84-aa7d-13a845147d80-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-lrr94\" (UID: \"728a0007-d901-4c84-aa7d-13a845147d80\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:03 crc kubenswrapper[4829]: I0217 16:02:03.816728 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:04 crc kubenswrapper[4829]: I0217 16:02:04.081269 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94"] Feb 17 16:02:04 crc kubenswrapper[4829]: I0217 16:02:04.922962 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" event={"ID":"728a0007-d901-4c84-aa7d-13a845147d80","Type":"ContainerStarted","Data":"1cd6e487380d264ff565e75d4d8ef446ab7e75727b950fa9760858b1c7c2fea3"} Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.930441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" event={"ID":"728a0007-d901-4c84-aa7d-13a845147d80","Type":"ContainerStarted","Data":"7a70d732a62c33929e736cfd50090b9f5f9258478f9e6bc747a698b122b8489f"} Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.930904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.940211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" Feb 17 16:02:05 crc kubenswrapper[4829]: I0217 16:02:05.950302 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-lrr94" podStartSLOduration=2.337651954 podStartE2EDuration="3.950287697s" podCreationTimestamp="2026-02-17 16:02:02 +0000 UTC" firstStartedPulling="2026-02-17 16:02:04.087548996 +0000 UTC m=+436.504566994" lastFinishedPulling="2026-02-17 16:02:05.700184759 +0000 UTC m=+438.117202737" observedRunningTime="2026-02-17 16:02:05.94592911 +0000 UTC 
m=+438.362947098" watchObservedRunningTime="2026-02-17 16:02:05.950287697 +0000 UTC m=+438.367305685" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.009828 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.010638 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013038 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013296 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.013952 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-zqv84" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.025213 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.097948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098038 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098117 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.098178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199798 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199860 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.199894 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: E0217 16:02:07.199998 4829 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 17 16:02:07 crc kubenswrapper[4829]: E0217 16:02:07.200050 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls podName:bb5ca468-da43-4076-b607-21a3a3799c55 nodeName:}" failed. No retries permitted until 2026-02-17 16:02:07.7000296 +0000 UTC m=+440.117047578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls") pod "prometheus-operator-db54df47d-nrldr" (UID: "bb5ca468-da43-4076-b607-21a3a3799c55") : secret "prometheus-operator-tls" not found Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.200900 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bb5ca468-da43-4076-b607-21a3a3799c55-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.222400 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4gv6\" (UniqueName: \"kubernetes.io/projected/bb5ca468-da43-4076-b607-21a3a3799c55-kube-api-access-w4gv6\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.223145 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.706689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.721267 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb5ca468-da43-4076-b607-21a3a3799c55-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nrldr\" (UID: \"bb5ca468-da43-4076-b607-21a3a3799c55\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:07 crc kubenswrapper[4829]: I0217 16:02:07.925821 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" Feb 17 16:02:08 crc kubenswrapper[4829]: I0217 16:02:08.379614 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nrldr"] Feb 17 16:02:08 crc kubenswrapper[4829]: W0217 16:02:08.382754 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb5ca468_da43_4076_b607_21a3a3799c55.slice/crio-a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15 WatchSource:0}: Error finding container a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15: Status 404 returned error can't find the container with id a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15 Feb 17 16:02:08 crc kubenswrapper[4829]: I0217 16:02:08.951556 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"a62b9e128685896bda251e027f81b4daa4c43a2b564b6ccf3017380ed4c7fd15"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.962678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" 
event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"7f3b14e607153a2972e1f1e90a136cf52bd5328f5de3675740e42c522750e0c1"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.963361 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" event={"ID":"bb5ca468-da43-4076-b607-21a3a3799c55","Type":"ContainerStarted","Data":"2117fd359e56760977e0aba46c4265804b49ec407e0f222862b3897c8c0232f0"} Feb 17 16:02:10 crc kubenswrapper[4829]: I0217 16:02:10.990014 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nrldr" podStartSLOduration=3.35964792 podStartE2EDuration="4.989996788s" podCreationTimestamp="2026-02-17 16:02:06 +0000 UTC" firstStartedPulling="2026-02-17 16:02:08.385184296 +0000 UTC m=+440.802202274" lastFinishedPulling="2026-02-17 16:02:10.015533164 +0000 UTC m=+442.432551142" observedRunningTime="2026-02-17 16:02:10.983684255 +0000 UTC m=+443.400702283" watchObservedRunningTime="2026-02-17 16:02:10.989996788 +0000 UTC m=+443.407014776" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.347831 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.349021 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353484 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353712 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-97ncs" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.353852 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.355008 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.375096 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.376343 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382637 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.382973 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.383034 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q62sj" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.395789 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.446376 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-hww7w"] Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.447365 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.449522 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.449663 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.455832 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gcggn" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.490952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491347 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc 
kubenswrapper[4829]: I0217 16:02:13.491453 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491567 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491683 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.491929 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492025 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492058 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.492074 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593182 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: 
\"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593281 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593347 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593415 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593434 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593478 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc 
kubenswrapper[4829]: I0217 16:02:13.593496 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593863 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593859 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" 
(UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.593989 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.594691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/556c56e9-a5b5-4038-a036-176255a8d491-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.594689 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.595207 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.599964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.600842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/556c56e9-a5b5-4038-a036-176255a8d491-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.601694 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.602898 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.613993 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9w66\" (UniqueName: \"kubernetes.io/projected/0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0-kube-api-access-n9w66\") pod \"kube-state-metrics-777cb5bd5d-9nxbp\" (UID: \"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.614495 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszwq\" (UniqueName: \"kubernetes.io/projected/556c56e9-a5b5-4038-a036-176255a8d491-kube-api-access-jszwq\") pod \"openshift-state-metrics-566fddb674-rkgbq\" (UID: \"556c56e9-a5b5-4038-a036-176255a8d491\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.671901 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.694694 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695190 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 
16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.694942 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-root\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-wtmp\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.695989 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d943ca51-64b2-4a03-a7cd-9fdc430742a5-sys\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-textfile\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696776 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.696887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d943ca51-64b2-4a03-a7cd-9fdc430742a5-metrics-client-ca\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.699544 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.699545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d943ca51-64b2-4a03-a7cd-9fdc430742a5-node-exporter-tls\") pod \"node-exporter-hww7w\" (UID: 
\"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.713354 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrlbx\" (UniqueName: \"kubernetes.io/projected/d943ca51-64b2-4a03-a7cd-9fdc430742a5-kube-api-access-hrlbx\") pod \"node-exporter-hww7w\" (UID: \"d943ca51-64b2-4a03-a7cd-9fdc430742a5\") " pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.762773 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-hww7w" Feb 17 16:02:13 crc kubenswrapper[4829]: I0217 16:02:13.983917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"caef9da3426de438b5353f2604f619d63a795417f75c2a7ef8a37a75d97991be"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.079010 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp"] Feb 17 16:02:14 crc kubenswrapper[4829]: W0217 16:02:14.083153 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c36ac2a_a1c8_4e56_a6fd_077e321dbeb0.slice/crio-3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291 WatchSource:0}: Error finding container 3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291: Status 404 returned error can't find the container with id 3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291 Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.136943 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq"] Feb 17 16:02:14 crc kubenswrapper[4829]: W0217 16:02:14.142180 4829 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod556c56e9_a5b5_4038_a036_176255a8d491.slice/crio-7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd WatchSource:0}: Error finding container 7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd: Status 404 returned error can't find the container with id 7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.410263 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.412175 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.416385 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420221 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420402 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.420843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.421075 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.421228 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.422664 4829 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-dd55m" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.423050 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.424965 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.446179 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510588 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510731 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.510937 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511001 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc 
kubenswrapper[4829]: I0217 16:02:14.511104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.511225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612318 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612362 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod 
\"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612400 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612428 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612448 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod 
\"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.612482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.613208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.613702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.614597 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617231 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-volume\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.617703 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.618767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.619024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.619966 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-web-config\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.620433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-config-out\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.621324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.628564 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7k8\" (UniqueName: \"kubernetes.io/projected/6ed9f3be-0a53-4ab0-98d0-7f3644b24cab-kube-api-access-7h7k8\") pod \"alertmanager-main-0\" (UID: \"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab\") " pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.732625 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.993949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"3137e808bf3b5b4a67a654176f9adc0917236b57a3e6ee181f5ae2746e9c4291"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"f3a029cb5f5ac465316b3fdcbc5bfeee9a734902b2ea8c58f62be1b62341cda5"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997630 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"c9f4c81ba3712eba7fbe0f174f70aec6a812e8bc5cf3462612206c40e4b84968"} Feb 17 16:02:14 crc kubenswrapper[4829]: I0217 16:02:14.997641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"7faa5cc518b8a7a6b51158fa4518c8072065861468d76ccd843beb0029d670dd"} Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.208071 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.429921 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.439620 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"] Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.439965 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.443767 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444084 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444416 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-ag8cv1l60vbo7" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444561 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444835 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4ss68" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.444759 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529878 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529944 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.529966 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530062 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpp2\" 
(UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530253 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.530318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: W0217 16:02:15.589306 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed9f3be_0a53_4ab0_98d0_7f3644b24cab.slice/crio-775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62 WatchSource:0}: Error finding container 775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62: Status 404 returned error can't find the container with id 775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62 Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631567 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: 
\"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631654 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631746 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631788 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpp2\" (UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631822 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.631883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.633211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dbf29c87-fafc-4650-9e33-9a12afaacff2-metrics-client-ca\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.636733 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " 
pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.639218 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.640484 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.641088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.644899 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.645458 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/dbf29c87-fafc-4650-9e33-9a12afaacff2-secret-grpc-tls\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.653175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpp2\" (UniqueName: \"kubernetes.io/projected/dbf29c87-fafc-4650-9e33-9a12afaacff2-kube-api-access-sjpp2\") pod \"thanos-querier-866c8c9dc-fq52p\" (UID: \"dbf29c87-fafc-4650-9e33-9a12afaacff2\") " pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:15 crc kubenswrapper[4829]: I0217 16:02:15.770807 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:16 crc kubenswrapper[4829]: I0217 16:02:16.028820 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"775295082bed5a7594f2c12452e5d2d4a405c61ad2cc7ce60e3af71fe740bd62"} Feb 17 16:02:16 crc kubenswrapper[4829]: I0217 16:02:16.366901 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-866c8c9dc-fq52p"] Feb 17 16:02:16 crc kubenswrapper[4829]: W0217 16:02:16.376959 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf29c87_fafc_4650_9e33_9a12afaacff2.slice/crio-6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9 WatchSource:0}: Error finding container 6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9: Status 404 returned error can't find the container with id 6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9 Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.044330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"6cad66e4891f598247e5680e9193a150417a829903c44bf22ebf200fd85cc8b9"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.046836 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" event={"ID":"556c56e9-a5b5-4038-a036-176255a8d491","Type":"ContainerStarted","Data":"91a90ae48ff47b7a38a1b1567709e1ceb8a7b36169b06f36b5c51683e653d9bf"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.049013 4829 generic.go:334] "Generic (PLEG): container finished" podID="d943ca51-64b2-4a03-a7cd-9fdc430742a5" containerID="0e5d88d101bc75b54345559672b0940d377e7c4ec415bbf75091b74dace05853" exitCode=0 Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.049151 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerDied","Data":"0e5d88d101bc75b54345559672b0940d377e7c4ec415bbf75091b74dace05853"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058226 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"2874148a0a9604029daeda794ece867eb6a4d34044e6495008a35805db480a58"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058329 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"bdf680ddbe2042baa65364ea0790d22eb955450941e135a37ec4cb0478856685"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.058345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" 
event={"ID":"0c36ac2a-a1c8-4e56-a6fd-077e321dbeb0","Type":"ContainerStarted","Data":"2cbc7f05e03100e5f030e25c30b13de0fc17c86d99293c08f284f9d67461e53c"} Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.066449 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-rkgbq" podStartSLOduration=2.5544200889999997 podStartE2EDuration="4.066418995s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:14.443868706 +0000 UTC m=+446.860886684" lastFinishedPulling="2026-02-17 16:02:15.955867602 +0000 UTC m=+448.372885590" observedRunningTime="2026-02-17 16:02:17.065342484 +0000 UTC m=+449.482360462" watchObservedRunningTime="2026-02-17 16:02:17.066418995 +0000 UTC m=+449.483436973" Feb 17 16:02:17 crc kubenswrapper[4829]: I0217 16:02:17.106338 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9nxbp" podStartSLOduration=2.293199559 podStartE2EDuration="4.106312776s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:14.085866439 +0000 UTC m=+446.502884427" lastFinishedPulling="2026-02-17 16:02:15.898979666 +0000 UTC m=+448.315997644" observedRunningTime="2026-02-17 16:02:17.083936355 +0000 UTC m=+449.500954353" watchObservedRunningTime="2026-02-17 16:02:17.106312776 +0000 UTC m=+449.523330744" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.066793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"c11b32d64a3f66100a0165920c49ce76a1a62f5950c031c2f7d9ea1cc4115fdc"} Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.067245 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-hww7w" 
event={"ID":"d943ca51-64b2-4a03-a7cd-9fdc430742a5","Type":"ContainerStarted","Data":"1bcc54d9357e5cc7264fcded6f2e7889686ca8dea70f6549d65e017a70c7c568"} Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.068091 4829 generic.go:334] "Generic (PLEG): container finished" podID="6ed9f3be-0a53-4ab0-98d0-7f3644b24cab" containerID="6b424b27de387b02b4b52768ced291fe81d653efeffb0de595f53abb04a48b44" exitCode=0 Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.068141 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerDied","Data":"6b424b27de387b02b4b52768ced291fe81d653efeffb0de595f53abb04a48b44"} Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.104288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-hww7w" podStartSLOduration=3.734504248 podStartE2EDuration="5.104270084s" podCreationTimestamp="2026-02-17 16:02:13 +0000 UTC" firstStartedPulling="2026-02-17 16:02:13.80679773 +0000 UTC m=+446.223815708" lastFinishedPulling="2026-02-17 16:02:15.176563566 +0000 UTC m=+447.593581544" observedRunningTime="2026-02-17 16:02:18.093346556 +0000 UTC m=+450.510364534" watchObservedRunningTime="2026-02-17 16:02:18.104270084 +0000 UTC m=+450.521288072" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.177053 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.178161 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.236298 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273265 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273418 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.273500 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375314 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375340 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375360 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.375409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.376633 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.376688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.377112 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.378989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.381184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.382133 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.391865 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"console-847cdd58c-slpz9\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") " pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.505167 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.556755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"] Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.557562 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560455 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560456 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-627cz" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.560911 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fkhkec7ff3h1k" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.564462 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.570150 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"] Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.582082 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.684993 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.685051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686106 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686150 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " 
pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.686209 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787311 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787701 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787761 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787811 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.787899 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.788772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/b1a57ae3-3984-406d-b3f4-a4c226234382-audit-log\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.789221 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.789643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/b1a57ae3-3984-406d-b3f4-a4c226234382-metrics-server-audit-profiles\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.793147 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-server-tls\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.793475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-client-ca-bundle\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.794381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b1a57ae3-3984-406d-b3f4-a4c226234382-secret-metrics-client-certs\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " 
pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.812201 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gkt\" (UniqueName: \"kubernetes.io/projected/b1a57ae3-3984-406d-b3f4-a4c226234382-kube-api-access-96gkt\") pod \"metrics-server-77856db6f9-6hhhb\" (UID: \"b1a57ae3-3984-406d-b3f4-a4c226234382\") " pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:18 crc kubenswrapper[4829]: I0217 16:02:18.886448 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.027521 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"] Feb 17 16:02:19 crc kubenswrapper[4829]: W0217 16:02:19.048379 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b2f8413_6a54_4bef_a63e_f2b278f57a6d.slice/crio-bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd WatchSource:0}: Error finding container bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd: Status 404 returned error can't find the container with id bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.075018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerStarted","Data":"bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd"} Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.077651 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" 
event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"6ca5a643784c8c5367f3e65a1fd29d033304a15413638a481bdc97d04027bd70"} Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.077671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"f598151b685b169b84e85f8d23310056f43371ae6cd306df0ed7cd0b72b8789f"} Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.139324 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"] Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.140284 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.146923 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.147083 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.149694 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"] Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.295279 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.304005 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/metrics-server-77856db6f9-6hhhb"] Feb 17 16:02:19 crc kubenswrapper[4829]: W0217 16:02:19.312815 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a57ae3_3984_406d_b3f4_a4c226234382.slice/crio-58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c WatchSource:0}: Error finding container 58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c: Status 404 returned error can't find the container with id 58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.396688 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.404696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/211288e8-fde3-46bb-99ee-46749e19112a-monitoring-plugin-cert\") pod \"monitoring-plugin-7dbdd84b7f-bzxpg\" (UID: \"211288e8-fde3-46bb-99ee-46749e19112a\") " pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:19 crc kubenswrapper[4829]: I0217 16:02:19.463627 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.084205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerStarted","Data":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"} Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.086858 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"35be71e2c35fded3288cd100d8af21765ea2dd1c1f28ab6ae6f19e3bd820524b"} Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.087914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" event={"ID":"b1a57ae3-3984-406d-b3f4-a4c226234382","Type":"ContainerStarted","Data":"58eb44c902c64bad760fd517fd247ff82fb4a581533683664c887a007bc85c4c"} Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.351444 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg"] Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.354200 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-847cdd58c-slpz9" podStartSLOduration=2.35417684 podStartE2EDuration="2.35417684s" podCreationTimestamp="2026-02-17 16:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:02:20.347255668 +0000 UTC m=+452.764273666" watchObservedRunningTime="2026-02-17 16:02:20.35417684 +0000 UTC m=+452.771194828" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.423169 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:02:20 crc 
kubenswrapper[4829]: I0217 16:02:20.425096 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429531 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429530 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429946 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.429819 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430760 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430811 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430776 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.430954 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.431087 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-f4i6b27l8t32" 
Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.434567 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.434858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-4r2hf" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.438030 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.455637 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512189 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512228 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512247 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 
17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512294 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512319 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512539 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512620 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512645 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512668 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\") 
pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512722 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512752 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.512789 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.513013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614785 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614839 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614901 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614950 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.614987 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615005 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615153 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615192 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615211 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615228 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.615244 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.616506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.616795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.620942 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.621412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.622097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.623607 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a265a122-2cfe-440c-bf5a-881b4144381d-config-out\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.623895 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.624122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.624368 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.625307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.625407 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.628808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a265a122-2cfe-440c-bf5a-881b4144381d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.629914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.634924 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.637803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-web-config\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.637851 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.640494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a265a122-2cfe-440c-bf5a-881b4144381d-secret-grpc-tls\") 
pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.644881 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9tj\" (UniqueName: \"kubernetes.io/projected/a265a122-2cfe-440c-bf5a-881b4144381d-kube-api-access-9z9tj\") pod \"prometheus-k8s-0\" (UID: \"a265a122-2cfe-440c-bf5a-881b4144381d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:20 crc kubenswrapper[4829]: I0217 16:02:20.761640 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:21 crc kubenswrapper[4829]: I0217 16:02:21.104867 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" event={"ID":"211288e8-fde3-46bb-99ee-46749e19112a","Type":"ContainerStarted","Data":"39e71a9ea17669c833f90c25cdc68a462caa52e2e6e5b3b06aab4d32f4b719f2"} Feb 17 16:02:21 crc kubenswrapper[4829]: I0217 16:02:21.784960 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.113920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"bcedfe4dd7d684dfd2615edaeb615a5d7fac07977499c79e3e153a541943d634"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114557 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"8fe1c29cc340dae45e8ecfa05205bde4738620c7c0536f3f5ce9c1e0d7173d6c"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114646 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.114666 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" event={"ID":"dbf29c87-fafc-4650-9e33-9a12afaacff2","Type":"ContainerStarted","Data":"fbe0004479cd1f6f0c8bf879a286aa3242234a6eee3233f2f69b53385237ea61"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.117805 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"f3523d8fe3c805b586550c700a868eee49125e80932010b843383f496fe72419"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118121 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"aa34e3b50980dd1f90989d4ceee4bf62df376386a2feb13487028480533552e0"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118228 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"33ebcfc502784b4dd5372cf4a2f474ae88104cfb490bced4e208f755865122ec"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.118250 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"6120f4986ef69dd47cd4bcf3a1ca1de2e1dfdd2b23cb22814581233e336a28b7"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120249 4829 generic.go:334] "Generic (PLEG): container finished" podID="a265a122-2cfe-440c-bf5a-881b4144381d" containerID="0782dc4434f3d1e0a5210a185283fae0b51d1016aa679d5509138a4fa3406164" exitCode=0 Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120290 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerDied","Data":"0782dc4434f3d1e0a5210a185283fae0b51d1016aa679d5509138a4fa3406164"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.120314 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"76ee5173001ec2023f4e2a7fc75fe3110b0d771da3686de8a16f837c89445cd7"} Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.123078 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:02:22 crc kubenswrapper[4829]: I0217 16:02:22.141257 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" podStartSLOduration=2.155389573 podStartE2EDuration="7.141219747s" podCreationTimestamp="2026-02-17 16:02:15 +0000 UTC" firstStartedPulling="2026-02-17 16:02:16.381851296 +0000 UTC m=+448.798869274" lastFinishedPulling="2026-02-17 16:02:21.36768147 +0000 UTC m=+453.784699448" observedRunningTime="2026-02-17 16:02:22.140294591 +0000 UTC m=+454.557312579" watchObservedRunningTime="2026-02-17 16:02:22.141219747 +0000 UTC m=+454.558237725" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.128751 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" event={"ID":"211288e8-fde3-46bb-99ee-46749e19112a","Type":"ContainerStarted","Data":"30d3dbf407a8ec4ea029ebdfe4eb064a03fe839804ff15c0263be170a6102483"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.129233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136765 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136811 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"f0515a9fa8de8362c9dc0421cf5cef0144cef9ee713a8539a0d492332136e0cb"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.136838 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6ed9f3be-0a53-4ab0-98d0-7f3644b24cab","Type":"ContainerStarted","Data":"bf2b980f826c0aa4ea0b10dd4cad63ee3aa66053375dbe519125062c9bef0e38"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.139603 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" event={"ID":"b1a57ae3-3984-406d-b3f4-a4c226234382","Type":"ContainerStarted","Data":"2a6939912041c5d0fcee4ebd5a43630c4e8c02b1305f160b3f8ebb4b64b01f74"} Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.153325 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dbdd84b7f-bzxpg" podStartSLOduration=2.53813861 podStartE2EDuration="4.153298736s" podCreationTimestamp="2026-02-17 16:02:19 +0000 UTC" firstStartedPulling="2026-02-17 16:02:21.070527764 +0000 UTC m=+453.487545742" lastFinishedPulling="2026-02-17 16:02:22.68568789 +0000 UTC m=+455.102705868" observedRunningTime="2026-02-17 16:02:23.151170315 +0000 UTC m=+455.568188293" watchObservedRunningTime="2026-02-17 16:02:23.153298736 +0000 UTC m=+455.570316714" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.158252 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-866c8c9dc-fq52p" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.182898 4829 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.406306095 podStartE2EDuration="9.182878157s" podCreationTimestamp="2026-02-17 16:02:14 +0000 UTC" firstStartedPulling="2026-02-17 16:02:15.592026865 +0000 UTC m=+448.009044843" lastFinishedPulling="2026-02-17 16:02:21.368598927 +0000 UTC m=+453.785616905" observedRunningTime="2026-02-17 16:02:23.178062727 +0000 UTC m=+455.595080705" watchObservedRunningTime="2026-02-17 16:02:23.182878157 +0000 UTC m=+455.599896135" Feb 17 16:02:23 crc kubenswrapper[4829]: I0217 16:02:23.228332 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" podStartSLOduration=1.866099448 podStartE2EDuration="5.228313459s" podCreationTimestamp="2026-02-17 16:02:18 +0000 UTC" firstStartedPulling="2026-02-17 16:02:19.317179456 +0000 UTC m=+451.734197434" lastFinishedPulling="2026-02-17 16:02:22.679393467 +0000 UTC m=+455.096411445" observedRunningTime="2026-02-17 16:02:23.209823192 +0000 UTC m=+455.626841170" watchObservedRunningTime="2026-02-17 16:02:23.228313459 +0000 UTC m=+455.645331437" Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.159946 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"8eaad2b8829c9f518ec03453a920606d566191cd710a47b003dfc5d0a48eca77"} Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.160449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"b4902392ae9e9faddbbaaf51c72a9490f48305b944f65a991fcc6e0497512878"} Feb 17 16:02:26 crc kubenswrapper[4829]: I0217 16:02:26.160462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"3ec330fc97ce84a7db0f6e465a1250c1dec7d059b774b5ce7b3c091d402ec3cf"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"af44c0435d95f9f06200cc1ef71b94fac11efd1c984df9938c5dac85acdd2e2c"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"dd5d6fa06f86e1582cc0f51c47a81ddbe84b4e0b6b0d3852faad86cebff02590"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.170469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"a265a122-2cfe-440c-bf5a-881b4144381d","Type":"ContainerStarted","Data":"1b007719f1f543d9b5475072dd81547d29c3ec96cb5f1a09119fe58fc39bd0c3"} Feb 17 16:02:27 crc kubenswrapper[4829]: I0217 16:02:27.199178 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.75717947 podStartE2EDuration="7.199160459s" podCreationTimestamp="2026-02-17 16:02:20 +0000 UTC" firstStartedPulling="2026-02-17 16:02:22.12274496 +0000 UTC m=+454.539762948" lastFinishedPulling="2026-02-17 16:02:25.564725939 +0000 UTC m=+457.981743937" observedRunningTime="2026-02-17 16:02:27.196474327 +0000 UTC m=+459.613492315" watchObservedRunningTime="2026-02-17 16:02:27.199160459 +0000 UTC m=+459.616178437" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.505684 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.506150 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:28 crc kubenswrapper[4829]: I0217 16:02:28.514338 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:29 crc kubenswrapper[4829]: I0217 16:02:29.192723 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-847cdd58c-slpz9" Feb 17 16:02:29 crc kubenswrapper[4829]: I0217 16:02:29.321143 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:30 crc kubenswrapper[4829]: I0217 16:02:30.763622 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:02:38 crc kubenswrapper[4829]: I0217 16:02:38.886700 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:38 crc kubenswrapper[4829]: I0217 16:02:38.887402 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:54 crc kubenswrapper[4829]: I0217 16:02:54.384611 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9fgb2" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" containerID="cri-o://054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" gracePeriod=15 Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385513 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385817 4829 generic.go:334] "Generic (PLEG): container finished" podID="96919462-7665-4b8f-8a8a-7c865d29393f" 
containerID="054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" exitCode=2 Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.385884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerDied","Data":"054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587"} Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.489277 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.489385 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.536834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.536883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537007 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537039 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537113 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537198 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.537257 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") pod \"96919462-7665-4b8f-8a8a-7c865d29393f\" (UID: \"96919462-7665-4b8f-8a8a-7c865d29393f\") " Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538236 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca" (OuterVolumeSpecName: "service-ca") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538404 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538423 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config" (OuterVolumeSpecName: "console-config") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.538452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.544985 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.547986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.552895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6" (OuterVolumeSpecName: "kube-api-access-99rq6") pod "96919462-7665-4b8f-8a8a-7c865d29393f" (UID: "96919462-7665-4b8f-8a8a-7c865d29393f"). InnerVolumeSpecName "kube-api-access-99rq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.639864 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99rq6\" (UniqueName: \"kubernetes.io/projected/96919462-7665-4b8f-8a8a-7c865d29393f-kube-api-access-99rq6\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640935 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640983 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.640995 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641009 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641022 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/96919462-7665-4b8f-8a8a-7c865d29393f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:56 crc kubenswrapper[4829]: I0217 16:02:56.641035 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/96919462-7665-4b8f-8a8a-7c865d29393f-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393794 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9fgb2_96919462-7665-4b8f-8a8a-7c865d29393f/console/0.log" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393873 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9fgb2" event={"ID":"96919462-7665-4b8f-8a8a-7c865d29393f","Type":"ContainerDied","Data":"a4dd5884310a79cb7487b5f3cbe05eafb8d2a2c5440edad3ee0322f1cc8a15db"} Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.393912 4829 scope.go:117] "RemoveContainer" containerID="054b516560d535dac8b939ba1e908698b9266e3c9318b11dc3da25e6a8620587" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.394038 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9fgb2" Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.438565 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:57 crc kubenswrapper[4829]: I0217 16:02:57.443765 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9fgb2"] Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.314988 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" path="/var/lib/kubelet/pods/96919462-7665-4b8f-8a8a-7c865d29393f/volumes" Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.897299 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:02:58 crc kubenswrapper[4829]: I0217 16:02:58.911970 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-77856db6f9-6hhhb" Feb 17 16:03:20 crc kubenswrapper[4829]: I0217 16:03:20.763852 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:20 crc kubenswrapper[4829]: I0217 16:03:20.806174 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:21 crc kubenswrapper[4829]: I0217 16:03:21.627705 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.225894 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:03:38 crc kubenswrapper[4829]: E0217 16:03:38.227100 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console" Feb 17 16:03:38 crc kubenswrapper[4829]: 
I0217 16:03:38.227124 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.227303 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="96919462-7665-4b8f-8a8a-7c865d29393f" containerName="console"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.228002 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.244655 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"]
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335801 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335820 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335840 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.335931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.336034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.437790 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.437897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438088 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438180 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438296 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.438699 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.439444 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.439707 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.440411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.445279 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.445769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.460025 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"console-797db4bf78-znlsn\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.552209 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:38 crc kubenswrapper[4829]: I0217 16:03:38.777389 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"]
Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.735051 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerStarted","Data":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"}
Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.735422 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerStarted","Data":"bfae83dcdb0a183b25666f792e4baf03784ae0581990e298c8186a70a2bee65f"}
Feb 17 16:03:39 crc kubenswrapper[4829]: I0217 16:03:39.773895 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-797db4bf78-znlsn" podStartSLOduration=1.773862941 podStartE2EDuration="1.773862941s" podCreationTimestamp="2026-02-17 16:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:03:39.762560206 +0000 UTC m=+532.179578274" watchObservedRunningTime="2026-02-17 16:03:39.773862941 +0000 UTC m=+532.190880959"
Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.552795 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.555806 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.562941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.818562 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-797db4bf78-znlsn"
Feb 17 16:03:48 crc kubenswrapper[4829]: I0217 16:03:48.902863 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:03:52 crc kubenswrapper[4829]: I0217 16:03:52.424454 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:03:52 crc kubenswrapper[4829]: I0217 16:03:52.424810 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:04:13 crc kubenswrapper[4829]: I0217 16:04:13.973959 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-847cdd58c-slpz9" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console" containerID="cri-o://f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" gracePeriod=15
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.414778 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-847cdd58c-slpz9_7b2f8413-6a54-4bef-a63e-f2b278f57a6d/console/0.log"
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.415218 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543278 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543404 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543441 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543517 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543610 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543658 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.543691 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") pod \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\" (UID: \"7b2f8413-6a54-4bef-a63e-f2b278f57a6d\") "
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544676 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544704 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config" (OuterVolumeSpecName: "console-config") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544759 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca" (OuterVolumeSpecName: "service-ca") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.544824 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.550249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj" (OuterVolumeSpecName: "kube-api-access-dnhjj") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "kube-api-access-dnhjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.552910 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.553796 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7b2f8413-6a54-4bef-a63e-f2b278f57a6d" (UID: "7b2f8413-6a54-4bef-a63e-f2b278f57a6d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.644959 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.644992 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnhjj\" (UniqueName: \"kubernetes.io/projected/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-kube-api-access-dnhjj\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645001 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645010 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645018 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645026 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:14 crc kubenswrapper[4829]: I0217 16:04:14.645034 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7b2f8413-6a54-4bef-a63e-f2b278f57a6d-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045419 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-847cdd58c-slpz9_7b2f8413-6a54-4bef-a63e-f2b278f57a6d/console/0.log"
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045512 4829 generic.go:334] "Generic (PLEG): container finished" podID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c" exitCode=2
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045566 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerDied","Data":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"}
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-847cdd58c-slpz9" event={"ID":"7b2f8413-6a54-4bef-a63e-f2b278f57a6d","Type":"ContainerDied","Data":"bf992b7cf5d41d19f78e161c41369ada93d18d4accc3edca33df6e29ddb941dd"}
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045716 4829 scope.go:117] "RemoveContainer" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.045747 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-847cdd58c-slpz9"
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.082632 4829 scope.go:117] "RemoveContainer" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"
Feb 17 16:04:15 crc kubenswrapper[4829]: E0217 16:04:15.083233 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": container with ID starting with f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c not found: ID does not exist" containerID="f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.083307 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c"} err="failed to get container status \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": rpc error: code = NotFound desc = could not find container \"f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c\": container with ID starting with f78c550251012f1525048fc247c4f0a7c6cd76f1f0a6325e105de9379ce70f6c not found: ID does not exist"
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.122239 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:04:15 crc kubenswrapper[4829]: I0217 16:04:15.136323 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-847cdd58c-slpz9"]
Feb 17 16:04:16 crc kubenswrapper[4829]: I0217 16:04:16.295027 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" path="/var/lib/kubelet/pods/7b2f8413-6a54-4bef-a63e-f2b278f57a6d/volumes"
Feb 17 16:04:22 crc kubenswrapper[4829]: I0217 16:04:22.425315 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:04:22 crc kubenswrapper[4829]: I0217 16:04:22.426007 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.425256 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.425975 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426041 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw"
Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426794 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:04:52 crc kubenswrapper[4829]: I0217 16:04:52.426892 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" gracePeriod=600
Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349056 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" exitCode=0
Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349133 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e"}
Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"}
Feb 17 16:04:53 crc kubenswrapper[4829]: I0217 16:04:53.349962 4829 scope.go:117] "RemoveContainer" containerID="82a3319848c2bfc3a4d283b125b8c2f2608eba86a59e07c7bb4a89100deb860c"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.882535 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"]
Feb 17 16:06:13 crc kubenswrapper[4829]: E0217 16:06:13.883708 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.883733 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.883967 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2f8413-6a54-4bef-a63e-f2b278f57a6d" containerName="console"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.885462 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.894818 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.896370 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"]
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943395 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943459 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:13 crc kubenswrapper[4829]: I0217 16:06:13.943487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044738 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044868 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.044925 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.045763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.045797 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.072700 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.205288 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:14 crc kubenswrapper[4829]: I0217 16:06:14.496436 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"]
Feb 17 16:06:15 crc kubenswrapper[4829]: I0217 16:06:15.024757 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerStarted","Data":"dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0"}
Feb 17 16:06:15 crc kubenswrapper[4829]: I0217 16:06:15.025145 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerStarted","Data":"eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384"}
Feb 17 16:06:16 crc kubenswrapper[4829]: I0217 16:06:16.031403 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0" exitCode=0
Feb 17 16:06:16 crc kubenswrapper[4829]: I0217 16:06:16.031441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"dd2b3d23f71818f8482c01e06d8d3f041b3b1cd0157e2ecf18f56e5b8c026bf0"}
Feb 17 16:06:18 crc kubenswrapper[4829]: I0217 16:06:18.056983 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="3b44369213f31f496419e5b7daa056d8091242c791a342d2f9f9c30abd0445e8" exitCode=0
Feb 17 16:06:18 crc kubenswrapper[4829]: I0217 16:06:18.057074 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"3b44369213f31f496419e5b7daa056d8091242c791a342d2f9f9c30abd0445e8"}
Feb 17 16:06:19 crc kubenswrapper[4829]: I0217 16:06:19.066185 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerID="829c2e7f2c989ba6ce504343e24bc2ccb57c7281d5dbce073b8332223ef12d4a" exitCode=0
Feb 17 16:06:19 crc kubenswrapper[4829]: I0217 16:06:19.066293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"829c2e7f2c989ba6ce504343e24bc2ccb57c7281d5dbce073b8332223ef12d4a"}
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.367623 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n"
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.438838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") "
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.438924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") "
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.439053 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") pod \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\" (UID: \"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460\") "
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.441451 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle" (OuterVolumeSpecName: "bundle") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.451001 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util" (OuterVolumeSpecName: "util") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.461900 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82" (OuterVolumeSpecName: "kube-api-access-vgv82") pod "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" (UID: "a1ffb98f-3b96-4b10-9f6b-7fa5b840d460"). InnerVolumeSpecName "kube-api-access-vgv82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.540629 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.540960 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:20 crc kubenswrapper[4829]: I0217 16:06:20.541090 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgv82\" (UniqueName: \"kubernetes.io/projected/a1ffb98f-3b96-4b10-9f6b-7fa5b840d460-kube-api-access-vgv82\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083336 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" event={"ID":"a1ffb98f-3b96-4b10-9f6b-7fa5b840d460","Type":"ContainerDied","Data":"eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384"} Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083748 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf549d3cfb9f4dbad8f9dcf62d53e2840ef6ec1dba57d743662d86cbbe07384" Feb 17 16:06:21 crc kubenswrapper[4829]: I0217 16:06:21.083431 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.829623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830416 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830432 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830460 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="util" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830468 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="util" Feb 17 16:06:31 crc kubenswrapper[4829]: E0217 16:06:31.830480 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="pull" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.830489 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="pull" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.831137 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ffb98f-3b96-4b10-9f6b-7fa5b840d460" containerName="extract" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.831656 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.834752 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.835043 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.835662 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-sg987" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.847220 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.899661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" (UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.949643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.950370 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.951946 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nks7v" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.952475 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.962845 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.963649 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.973149 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:31 crc kubenswrapper[4829]: I0217 16:06:31.979172 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001324 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" (UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001393 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.001450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.045898 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vln\" (UniqueName: \"kubernetes.io/projected/edb49e50-f230-48c5-b2e5-fe59a3ae73fa-kube-api-access-r4vln\") pod \"obo-prometheus-operator-68bc856cb9-cwcb6\" 
(UID: \"edb49e50-f230-48c5-b2e5-fe59a3ae73fa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102758 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102812 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102838 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.102862 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106606 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106919 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.106997 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/54e12496-0dd9-43a5-accb-e17546b7b715-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-6q6r7\" (UID: \"54e12496-0dd9-43a5-accb-e17546b7b715\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.119968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3ae1cd0-485d-4d83-8601-79d0c99bf9e8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6bb447465-vsf4q\" (UID: \"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.163725 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.163898 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.164850 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.166908 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.168020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-8gbgz" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.204274 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.204337 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.231392 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 
16:06:32.267500 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.281386 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.307430 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.307491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.321442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3431d3-b6f2-4658-b45c-c428b77e98df-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.355211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqq8t\" (UniqueName: \"kubernetes.io/projected/9d3431d3-b6f2-4658-b45c-c428b77e98df-kube-api-access-xqq8t\") pod 
\"observability-operator-59bdc8b94-9xj96\" (UID: \"9d3431d3-b6f2-4658-b45c-c428b77e98df\") " pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.395692 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.396662 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.398532 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-msgzl" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.409651 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.410118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.410173 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516423 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd120281-015e-45a4-b1ae-f868b2326499-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.516603 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.529203 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.545606 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xcl2\" (UniqueName: \"kubernetes.io/projected/dd120281-015e-45a4-b1ae-f868b2326499-kube-api-access-4xcl2\") pod \"perses-operator-5bf474d74f-f6t4s\" (UID: \"dd120281-015e-45a4-b1ae-f868b2326499\") " pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.674161 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.725013 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.734409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.803188 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7"] Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.852320 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9xj96"] Feb 17 16:06:32 crc kubenswrapper[4829]: W0217 16:06:32.858332 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3431d3_b6f2_4658_b45c_c428b77e98df.slice/crio-93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3 WatchSource:0}: Error finding container 93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3: Status 404 returned error can't find the container with id 93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3 Feb 17 16:06:32 crc kubenswrapper[4829]: I0217 16:06:32.942380 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f6t4s"] Feb 17 16:06:32 crc kubenswrapper[4829]: W0217 16:06:32.946147 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd120281_015e_45a4_b1ae_f868b2326499.slice/crio-d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033 WatchSource:0}: Error finding container d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033: Status 404 returned error can't find the container with id d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033 Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.154256 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" event={"ID":"54e12496-0dd9-43a5-accb-e17546b7b715","Type":"ContainerStarted","Data":"078b55e10f34b0421d9bb8c7a46bff6a31903748728fe58c08c6ebdda7a7aec9"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.155732 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" event={"ID":"dd120281-015e-45a4-b1ae-f868b2326499","Type":"ContainerStarted","Data":"d0b785faa8b7f5fab9abb4879450efcfe28dc875f7305b521315785c0a936033"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.157309 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" event={"ID":"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8","Type":"ContainerStarted","Data":"9ce2b012b069c341f7a7901979a72c3602939b601fcb719b9088dbe5fc844951"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.158596 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" event={"ID":"9d3431d3-b6f2-4658-b45c-c428b77e98df","Type":"ContainerStarted","Data":"93d4a908c9f53a6dc8d6cfd757ba6229e56f607d7813db631e8c8e833102a7b3"} Feb 17 16:06:33 crc kubenswrapper[4829]: I0217 16:06:33.160349 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" event={"ID":"edb49e50-f230-48c5-b2e5-fe59a3ae73fa","Type":"ContainerStarted","Data":"eac20a92dfcfdbc66e320fa2aa5349b93ab0d093380c1bbd953b52ddfbd9e887"} Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.461015 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.469091 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" 
podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" containerID="cri-o://023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.469991 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" containerID="cri-o://6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470508 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" containerID="cri-o://d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470557 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" containerID="cri-o://f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470643 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" containerID="cri-o://bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470690 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" containerID="cri-o://0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" gracePeriod=30 Feb 17 
16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.470724 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a" gracePeriod=30 Feb 17 16:06:40 crc kubenswrapper[4829]: I0217 16:06:40.524457 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" containerID="cri-o://eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" gracePeriod=30 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.226147 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovnkube-controller/3.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.228657 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229167 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229448 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229477 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 
16:06:41.229487 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229496 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c" exitCode=0 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229505 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" exitCode=143 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229514 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" exitCode=143 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229551 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229592 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229611 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229628 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.229643 4829 scope.go:117] "RemoveContainer" containerID="9fb224be75a1affd04c4444b146efebde6fba1114c13167d2bb0aca056a31ea9" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232455 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232787 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/1.log" Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232814 4829 generic.go:334] "Generic (PLEG): container finished" podID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" exitCode=2 Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.232832 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" 
event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerDied","Data":"f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27"} Feb 17 16:06:41 crc kubenswrapper[4829]: I0217 16:06:41.233276 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:06:41 crc kubenswrapper[4829]: E0217 16:06:41.233537 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.249634 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250048 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250565 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a" exitCode=0 Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250611 4829 generic.go:334] "Generic (PLEG): container finished" podID="fad9f982-deda-446c-8801-dc47104eee62" containerID="0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" exitCode=0 Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250632 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" 
event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a"} Feb 17 16:06:42 crc kubenswrapper[4829]: I0217 16:06:42.250656 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906"} Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.256919 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.257864 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.258428 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 
16:06:44.258460 4829 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.372794 4829 scope.go:117] "RemoveContainer" containerID="bf2c7b1b481315da1b0a39216b69e81653db6c0083c00776078387a8e8ed28a7" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512038 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512530 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.512969 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592524 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqwqs"] Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592743 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592755 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592764 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592770 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592777 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592785 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592795 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592800 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592810 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" 
containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592817 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592824 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kubecfg-setup" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592830 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kubecfg-setup" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592839 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592844 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592853 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592858 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592868 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592874 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592884 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592889 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592899 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592905 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.592915 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.592920 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593011 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593023 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593029 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="sbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593041 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="kube-rbac-proxy-node" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593049 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="nbdb" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593056 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-acl-logging" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593063 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovn-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593071 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="northd" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593078 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593085 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: E0217 16:06:44.593173 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593179 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593277 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.593460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad9f982-deda-446c-8801-dc47104eee62" containerName="ovnkube-controller" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.594923 4829 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602733 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602790 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602818 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602886 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602901 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602928 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602951 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.602981 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603001 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") pod 
\"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603026 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603038 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603053 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603097 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603098 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603142 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603159 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603175 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") pod \"fad9f982-deda-446c-8801-dc47104eee62\" (UID: \"fad9f982-deda-446c-8801-dc47104eee62\") " Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603250 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603280 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603390 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603471 4829 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603484 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603493 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603501 4829 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.603818 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604807 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log" (OuterVolumeSpecName: "node-log") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604872 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash" (OuterVolumeSpecName: "host-slash") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604894 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.604919 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605208 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605283 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605317 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket" (OuterVolumeSpecName: "log-socket") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605367 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.605431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.617224 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.617912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8" (OuterVolumeSpecName: "kube-api-access-tbqk8") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "kube-api-access-tbqk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.628961 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fad9f982-deda-446c-8801-dc47104eee62" (UID: "fad9f982-deda-446c-8801-dc47104eee62"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.704894 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705191 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705269 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705293 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705343 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705366 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705405 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705422 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705457 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705477 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705510 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705530 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705551 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705602 4829 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705613 4829 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fad9f982-deda-446c-8801-dc47104eee62-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705622 4829 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-log-socket\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705630 4829 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705638 4829 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705646 4829 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705655 4829 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705663 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705671 4829 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705680 4829 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705687 4829 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705695 4829 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fad9f982-deda-446c-8801-dc47104eee62-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705703 4829 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-node-log\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705711 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqk8\" (UniqueName: \"kubernetes.io/projected/fad9f982-deda-446c-8801-dc47104eee62-kube-api-access-tbqk8\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705719 4829 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.705738 4829 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fad9f982-deda-446c-8801-dc47104eee62-host-slash\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807113 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807175 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807195 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807221 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807235 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-kubelet\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807343 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807296 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-netns\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807470 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807493 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807629 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807667 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-ovn\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-slash\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-bin\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-cni-netd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807819 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-systemd-units\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807837 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-var-lib-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807856 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807901 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-node-log\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807860 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-run-systemd\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.807943 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-env-overrides\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-etc-openvswitch\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-script-lib\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808069 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-log-socket\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.808498 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovnkube-config\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.815192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-ovn-node-metrics-cert\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.831712 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lggb\" (UniqueName: \"kubernetes.io/projected/cc41a532-4c37-401e-b0f0-7a9a0561c2e2-kube-api-access-2lggb\") pod \"ovnkube-node-pqwqs\" (UID: \"cc41a532-4c37-401e-b0f0-7a9a0561c2e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:44 crc kubenswrapper[4829]: I0217 16:06:44.908379 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.270644 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" event={"ID":"a3ae1cd0-485d-4d83-8601-79d0c99bf9e8","Type":"ContainerStarted","Data":"991c2b44469b5bcb14e456f6cf46e9e2d49468461be7ee6d7bb5561de2fbfd18"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.272906 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.274178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" event={"ID":"9d3431d3-b6f2-4658-b45c-c428b77e98df","Type":"ContainerStarted","Data":"3e3add12b9755ba83c31f6e709eac8c433f3a9d98ad67548f3a8233b50097f31"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.275468 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9xj96"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.276920 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9xj96"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.277138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" event={"ID":"edb49e50-f230-48c5-b2e5-fe59a3ae73fa","Type":"ContainerStarted","Data":"e4c4e834ef0b512da93ec7bfdec8d4cf293811857e0539c1e67503bf6fadb078"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.282203 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-acl-logging/0.log"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283062 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjd7r_fad9f982-deda-446c-8801-dc47104eee62/ovn-controller/0.log"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r" event={"ID":"fad9f982-deda-446c-8801-dc47104eee62","Type":"ContainerDied","Data":"24d57c0da47dc7c1d3efad56150e9d7bcc709a048845a893acefdc17ba6fe78e"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.283848 4829 scope.go:117] "RemoveContainer" containerID="eccba414ce53a3060635572177d90ad05a0edea27e4f05f6f1994636d21e3fd6"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.284066 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjd7r"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.299366 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" event={"ID":"54e12496-0dd9-43a5-accb-e17546b7b715","Type":"ContainerStarted","Data":"08a2b1c068659d94358546c700431d82b1043a4c29696ba3e5bf716c7d527abe"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.301320 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" event={"ID":"dd120281-015e-45a4-b1ae-f868b2326499","Type":"ContainerStarted","Data":"770d17b85d06ec85ba48c749bf75d8f4cae79d4912c88d4b379bfb2dc96cb041"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.301926 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309357 4829 generic.go:334] "Generic (PLEG): container finished" podID="cc41a532-4c37-401e-b0f0-7a9a0561c2e2" containerID="9b7d1b0a6d48da78994667522e51713fca0cf71d5805e72d8583c4e1896889eb" exitCode=0
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309399 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerDied","Data":"9b7d1b0a6d48da78994667522e51713fca0cf71d5805e72d8583c4e1896889eb"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.309424 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"8542198e8b90f9ae5798217628d88c91623dd3376cd976c3ed467635691ddfea"}
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.326342 4829 scope.go:117] "RemoveContainer" containerID="d34ef9fbe19794889d4cc662583776425da8f13bb31a47ba53adda64d07b6584"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.340507 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-vsf4q" podStartSLOduration=2.771807485 podStartE2EDuration="14.340493128s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.761417389 +0000 UTC m=+705.178435367" lastFinishedPulling="2026-02-17 16:06:44.330103002 +0000 UTC m=+716.747121010" observedRunningTime="2026-02-17 16:06:45.299461405 +0000 UTC m=+717.716479393" watchObservedRunningTime="2026-02-17 16:06:45.340493128 +0000 UTC m=+717.757511106"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.356153 4829 scope.go:117] "RemoveContainer" containerID="f0e827e7f9a818a8ed3e6d9c0a93837ed47b58180624fc877849c19f375a63a1"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.365163 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwcb6" podStartSLOduration=2.732809726 podStartE2EDuration="14.365148669s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.70227081 +0000 UTC m=+705.119288788" lastFinishedPulling="2026-02-17 16:06:44.334609733 +0000 UTC m=+716.751627731" observedRunningTime="2026-02-17 16:06:45.341415192 +0000 UTC m=+717.758433170" watchObservedRunningTime="2026-02-17 16:06:45.365148669 +0000 UTC m=+717.782166647"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.366360 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9xj96" podStartSLOduration=1.824645285 podStartE2EDuration="13.366354432s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.861686232 +0000 UTC m=+705.278704220" lastFinishedPulling="2026-02-17 16:06:44.403395389 +0000 UTC m=+716.820413367" observedRunningTime="2026-02-17 16:06:45.364506952 +0000 UTC m=+717.781524930" watchObservedRunningTime="2026-02-17 16:06:45.366354432 +0000 UTC m=+717.783372410"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.385795 4829 scope.go:117] "RemoveContainer" containerID="6ed2c7840a2d4e155bfdd72d606518ae765f1170ea30cedcd40b94cc3c58807c"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.416001 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6bb447465-6q6r7" podStartSLOduration=2.898269771 podStartE2EDuration="14.415983555s" podCreationTimestamp="2026-02-17 16:06:31 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.813629141 +0000 UTC m=+705.230647119" lastFinishedPulling="2026-02-17 16:06:44.331342905 +0000 UTC m=+716.748360903" observedRunningTime="2026-02-17 16:06:45.394465446 +0000 UTC m=+717.811483424" watchObservedRunningTime="2026-02-17 16:06:45.415983555 +0000 UTC m=+717.833001533"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.419242 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" podStartSLOduration=1.967421652 podStartE2EDuration="13.419234851s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:32.948509778 +0000 UTC m=+705.365527756" lastFinishedPulling="2026-02-17 16:06:44.400322947 +0000 UTC m=+716.817340955" observedRunningTime="2026-02-17 16:06:45.417763492 +0000 UTC m=+717.834781470" watchObservedRunningTime="2026-02-17 16:06:45.419234851 +0000 UTC m=+717.836252829"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.419772 4829 scope.go:117] "RemoveContainer" containerID="41040337b35aa8ee370ce4062ac03b1ab149531e77458b429ba39000552ad57a"
Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.442465 4829 scope.go:117] "RemoveContainer"
containerID="0ee537c316c205fb343a79c14e0e0e3b959321a8619f943779bed6fd7d5d7906" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.459981 4829 scope.go:117] "RemoveContainer" containerID="bea01172ef2fd7ed6aa1cc8bd017460e3517779576e824819db94061c058a5d6" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.471551 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.475550 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjd7r"] Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.486942 4829 scope.go:117] "RemoveContainer" containerID="023786116a728d73e03303cfac9ad2e1332e16079c5ee2058a498563c14b169f" Feb 17 16:06:45 crc kubenswrapper[4829]: I0217 16:06:45.504202 4829 scope.go:117] "RemoveContainer" containerID="562255d0aa68de84b9c4e4341e6f01ac93b5ebf94a36b267fef8f439c4afdb12" Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.292218 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad9f982-deda-446c-8801-dc47104eee62" path="/var/lib/kubelet/pods/fad9f982-deda-446c-8801-dc47104eee62/volumes" Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"2933b4b2a67f0926a4b76845ddfccb6bf3be42388e49f3149c39d974d79139b4"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"8d8c324e5545ecdb1cc09ba574f13e01a6aa0d5e4437af370035aa9c359e47ba"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"fede21686976fe43f1c05763c4613aca57a319ba2e3136c771d5046fa3406dc3"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"931f244b081ab0711d7116ba493110ab103b7a5985e891f5a2c5124005fc8b1c"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"f8683f4d44df22585fb5bff9a5c7f727b2e3d88a992da739341edfb5b0a5505c"} Feb 17 16:06:46 crc kubenswrapper[4829]: I0217 16:06:46.317680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"666effb79e14960df18e0db22fb60aefafa035131d995e6009543129c45dd79a"} Feb 17 16:06:48 crc kubenswrapper[4829]: I0217 16:06:48.330232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"7f92fde773d72d3a44606cfba5a805e9a25ca7e2e6c4bb537adb93aa8860137e"} Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.389933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" event={"ID":"cc41a532-4c37-401e-b0f0-7a9a0561c2e2","Type":"ContainerStarted","Data":"fc5d8228d7c9c17201b7ac8435917189bdf200bcb184e9e854ce9202b731b25b"} Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.390826 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 
crc kubenswrapper[4829]: I0217 16:06:51.390944 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.391026 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.418636 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" podStartSLOduration=7.418617885 podStartE2EDuration="7.418617885s" podCreationTimestamp="2026-02-17 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:51.416623331 +0000 UTC m=+723.833641309" watchObservedRunningTime="2026-02-17 16:06:51.418617885 +0000 UTC m=+723.835635863" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.425294 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:51 crc kubenswrapper[4829]: I0217 16:06:51.427117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.425111 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.425469 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.577300 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.578003 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.586041 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.586760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587322 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hzdpq" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587451 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.587535 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.600691 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pm9m5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.613602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.618178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.628910 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 
17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.630246 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.633224 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.634471 4829 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-96c9z" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.716236 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.716320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.729894 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-f6t4s" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817328 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.817459 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.852547 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9l9g\" (UniqueName: \"kubernetes.io/projected/90365502-e574-4c31-b97b-ca69aac75648-kube-api-access-s9l9g\") pod \"cert-manager-cainjector-cf98fcc89-29pr5\" (UID: \"90365502-e574-4c31-b97b-ca69aac75648\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.853098 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdvg\" (UniqueName: \"kubernetes.io/projected/476f8c4d-b180-40c8-b5a7-120565b0789f-kube-api-access-8kdvg\") pod \"cert-manager-858654f9db-mf5jl\" (UID: \"476f8c4d-b180-40c8-b5a7-120565b0789f\") " pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.900622 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.907710 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.918330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.934914 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.934969 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.935005 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.935050 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(1cee1062625eddce489f11681a86ca3c15b0ef7ed5294a5aaf836e61fdd26ea0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.936934 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q6nd\" (UniqueName: \"kubernetes.io/projected/dc500c7f-2cf7-447f-ae9e-f22211c1d4ad-kube-api-access-6q6nd\") pod \"cert-manager-webhook-687f57d79b-rzvp5\" (UID: \"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.937982 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938023 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938043 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.938085 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(7d4aa7653d24fca6654e8c7dccdc961ee0bcf5dcf078fe415d4a4e7307e22cc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mf5jl" podUID="476f8c4d-b180-40c8-b5a7-120565b0789f" Feb 17 16:06:52 crc kubenswrapper[4829]: I0217 16:06:52.949707 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971768 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971826 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971852 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:52 crc kubenswrapper[4829]: E0217 16:06:52.971891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(cd041f586ee2ff9ad78077f140f6d4b2d68e762764b41287b825064acec70a65): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.279351 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.279682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nhlmt_openshift-multus(88e25bc5-0b59-4edf-a8f6-1a5a026155c4)\"" pod="openshift-multus/multus-nhlmt" podUID="88e25bc5-0b59-4edf-a8f6-1a5a026155c4" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401521 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401550 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.401661 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402065 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402132 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: I0217 16:06:53.402516 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482055 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482160 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482247 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.482326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(080944957e7106ef44baa570e584736458b95341c117f4d824b3a2ad7047cf16): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493289 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493363 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493394 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.493447 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(ab477bc83e5564a2d65ac4a013c3eccba469e933205036edd659e4fec221e07b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509266 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509353 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509380 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:06:53 crc kubenswrapper[4829]: E0217 16:06:53.509420 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-mf5jl_cert-manager(476f8c4d-b180-40c8-b5a7-120565b0789f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-mf5jl_cert-manager_476f8c4d-b180-40c8-b5a7-120565b0789f_0(cfb3305483a6b29547fbc3c19b988b85026399906c489a99fea9a7fbfc8d3ee3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-mf5jl" podUID="476f8c4d-b180-40c8-b5a7-120565b0789f" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.278921 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.279840 4829 scope.go:117] "RemoveContainer" containerID="f942e28636b72df44e43c6f231da859a17c15fa7d7d2fcd113e167d92107fb27" Feb 17 16:07:05 crc kubenswrapper[4829]: I0217 16:07:05.280200 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329709 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329769 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329793 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:05 crc kubenswrapper[4829]: E0217 16:07:05.329846 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-29pr5_cert-manager(90365502-e574-4c31-b97b-ca69aac75648)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-29pr5_cert-manager_90365502-e574-4c31-b97b-ca69aac75648_0(0054e9a7d7e3be750fb3738518d452ea9247b5b5477cb014105356471c35138e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podUID="90365502-e574-4c31-b97b-ca69aac75648" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.279285 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.280377 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320065 4829 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320119 4829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320141 4829 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:06 crc kubenswrapper[4829]: E0217 16:07:06.320187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-rzvp5_cert-manager(dc500c7f-2cf7-447f-ae9e-f22211c1d4ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-rzvp5_cert-manager_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad_0(be7c88c321e3c41993d359e324f3c58116dbd50674f35d69677814e52e81bc8f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podUID="dc500c7f-2cf7-447f-ae9e-f22211c1d4ad" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.510324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nhlmt_88e25bc5-0b59-4edf-a8f6-1a5a026155c4/kube-multus/2.log" Feb 17 16:07:06 crc kubenswrapper[4829]: I0217 16:07:06.510405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nhlmt" event={"ID":"88e25bc5-0b59-4edf-a8f6-1a5a026155c4","Type":"ContainerStarted","Data":"aa56853b3602137d47ca0ceae3dde453e9a6fb88133dbeed0156c70be560f295"} Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.279222 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.282636 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mf5jl" Feb 17 16:07:08 crc kubenswrapper[4829]: I0217 16:07:08.761804 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mf5jl"] Feb 17 16:07:08 crc kubenswrapper[4829]: W0217 16:07:08.770286 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod476f8c4d_b180_40c8_b5a7_120565b0789f.slice/crio-aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6 WatchSource:0}: Error finding container aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6: Status 404 returned error can't find the container with id aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6 Feb 17 16:07:09 crc kubenswrapper[4829]: I0217 16:07:09.534282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mf5jl" 
event={"ID":"476f8c4d-b180-40c8-b5a7-120565b0789f","Type":"ContainerStarted","Data":"aae8b1f69433207c10048ff69aac0f3407a5e7f31a9c2c3489ae83508cdd4dd6"} Feb 17 16:07:12 crc kubenswrapper[4829]: I0217 16:07:12.576179 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mf5jl" event={"ID":"476f8c4d-b180-40c8-b5a7-120565b0789f","Type":"ContainerStarted","Data":"3d364fd6c9a540e6fd7527ed8aede93c02efce3014ec5d5ad823e6323548e75f"} Feb 17 16:07:12 crc kubenswrapper[4829]: I0217 16:07:12.611683 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mf5jl" podStartSLOduration=17.810412364 podStartE2EDuration="20.611646809s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 16:07:08.772169902 +0000 UTC m=+741.189187890" lastFinishedPulling="2026-02-17 16:07:11.573404317 +0000 UTC m=+743.990422335" observedRunningTime="2026-02-17 16:07:12.600031098 +0000 UTC m=+745.017049116" watchObservedRunningTime="2026-02-17 16:07:12.611646809 +0000 UTC m=+745.028664827" Feb 17 16:07:14 crc kubenswrapper[4829]: I0217 16:07:14.949293 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqwqs" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.280978 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.282313 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:17 crc kubenswrapper[4829]: I0217 16:07:17.933352 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rzvp5"] Feb 17 16:07:17 crc kubenswrapper[4829]: W0217 16:07:17.944397 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc500c7f_2cf7_447f_ae9e_f22211c1d4ad.slice/crio-1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb WatchSource:0}: Error finding container 1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb: Status 404 returned error can't find the container with id 1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb Feb 17 16:07:18 crc kubenswrapper[4829]: I0217 16:07:18.624664 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" event={"ID":"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad","Type":"ContainerStarted","Data":"1790e0eed61eaf59e90138a3d771500258aafc452aab23645e8916fc2ffb3eeb"} Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.279088 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.280267 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" Feb 17 16:07:19 crc kubenswrapper[4829]: I0217 16:07:19.898553 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-29pr5"] Feb 17 16:07:19 crc kubenswrapper[4829]: W0217 16:07:19.905194 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90365502_e574_4c31_b97b_ca69aac75648.slice/crio-a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3 WatchSource:0}: Error finding container a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3: Status 404 returned error can't find the container with id a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3 Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.646607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" event={"ID":"dc500c7f-2cf7-447f-ae9e-f22211c1d4ad","Type":"ContainerStarted","Data":"f8cc6aa588d9e36a57087bba44fd8090b84e3ed8ed53846188cd7138fc3fa49f"} Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.646934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.648031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" event={"ID":"90365502-e574-4c31-b97b-ca69aac75648","Type":"ContainerStarted","Data":"a4c07f221fbbf5d1cd4ac66f56c3af5c358f5a10d1025babfa19d7e621c5f3d3"} Feb 17 16:07:20 crc kubenswrapper[4829]: I0217 16:07:20.673258 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" podStartSLOduration=26.949873111 podStartE2EDuration="28.673234058s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 
16:07:17.948805755 +0000 UTC m=+750.365823773" lastFinishedPulling="2026-02-17 16:07:19.672166742 +0000 UTC m=+752.089184720" observedRunningTime="2026-02-17 16:07:20.667647978 +0000 UTC m=+753.084665956" watchObservedRunningTime="2026-02-17 16:07:20.673234058 +0000 UTC m=+753.090252046" Feb 17 16:07:21 crc kubenswrapper[4829]: I0217 16:07:21.658767 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" event={"ID":"90365502-e574-4c31-b97b-ca69aac75648","Type":"ContainerStarted","Data":"436d578c65f80a3ec7cb12d6b5f155d2c05a8f7c1bdfce7fb5151b5ec7f7617b"} Feb 17 16:07:21 crc kubenswrapper[4829]: I0217 16:07:21.685052 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-29pr5" podStartSLOduration=28.307310293 podStartE2EDuration="29.68502722s" podCreationTimestamp="2026-02-17 16:06:52 +0000 UTC" firstStartedPulling="2026-02-17 16:07:19.907833939 +0000 UTC m=+752.324851927" lastFinishedPulling="2026-02-17 16:07:21.285550876 +0000 UTC m=+753.702568854" observedRunningTime="2026-02-17 16:07:21.679166063 +0000 UTC m=+754.096184081" watchObservedRunningTime="2026-02-17 16:07:21.68502722 +0000 UTC m=+754.102045238" Feb 17 16:07:22 crc kubenswrapper[4829]: I0217 16:07:22.425236 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:07:22 crc kubenswrapper[4829]: I0217 16:07:22.425325 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 17 16:07:24 crc kubenswrapper[4829]: I0217 16:07:24.722999 4829 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:07:27 crc kubenswrapper[4829]: I0217 16:07:27.953337 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rzvp5" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.184355 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.192075 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.204490 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277297 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.277758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379543 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.379903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.380244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.380807 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.408682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"redhat-marketplace-pwbz6\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:44 crc kubenswrapper[4829]: I0217 16:07:44.549031 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.031532 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:45 crc kubenswrapper[4829]: W0217 16:07:45.039775 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba WatchSource:0}: Error finding container c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba: Status 404 returned error can't find the container with id c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862426 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390" exitCode=0 Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862513 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" 
event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"} Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.862555 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerStarted","Data":"c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba"} Feb 17 16:07:45 crc kubenswrapper[4829]: I0217 16:07:45.864875 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:07:46 crc kubenswrapper[4829]: I0217 16:07:46.871498 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a" exitCode=0 Feb 17 16:07:46 crc kubenswrapper[4829]: I0217 16:07:46.871816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"} Feb 17 16:07:46 crc kubenswrapper[4829]: E0217 16:07:46.960175 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5962bde_d309_4dbe_b4ce_750af54dec5c.slice/crio-conmon-b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:07:47 crc kubenswrapper[4829]: I0217 16:07:47.885718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerStarted","Data":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} Feb 17 16:07:47 crc kubenswrapper[4829]: I0217 16:07:47.905327 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pwbz6" podStartSLOduration=2.427836329 podStartE2EDuration="3.905302584s" podCreationTimestamp="2026-02-17 16:07:44 +0000 UTC" firstStartedPulling="2026-02-17 16:07:45.864653091 +0000 UTC m=+778.281671069" lastFinishedPulling="2026-02-17 16:07:47.342119306 +0000 UTC m=+779.759137324" observedRunningTime="2026-02-17 16:07:47.901417389 +0000 UTC m=+780.318435467" watchObservedRunningTime="2026-02-17 16:07:47.905302584 +0000 UTC m=+780.322320602" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.544616 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.546465 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.570026 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680375 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.680421 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781547 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781678 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.781712 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.782152 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.782220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.809058 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"redhat-operators-qhlg9\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") " pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:50 crc kubenswrapper[4829]: I0217 16:07:50.869900 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.342737 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"] Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.913161 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828" exitCode=0 Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.913200 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"} Feb 17 16:07:51 crc kubenswrapper[4829]: I0217 16:07:51.914326 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"747ea8fe9b8d0099815a9e67eb706998bb857d51b0eefecdf7d0c1e5e5268d24"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424411 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424920 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.424993 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.426343 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.426490 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" gracePeriod=600 Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.928878 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934405 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" exitCode=0 Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934444 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934466 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} Feb 17 16:07:52 crc kubenswrapper[4829]: I0217 16:07:52.934484 4829 scope.go:117] "RemoveContainer" containerID="eeb52be39c27a863d0eb9fedbfac6f412e709f3d647076f5f2fa62b39387400e" Feb 17 16:07:53 crc kubenswrapper[4829]: I0217 16:07:53.945599 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c" exitCode=0 Feb 17 16:07:53 crc kubenswrapper[4829]: I0217 16:07:53.945676 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.550258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.550809 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.580761 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.581905 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.583458 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.605138 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.630511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641738 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.641920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod 
\"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.742912 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.742988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743049 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743538 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.743629 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.763880 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.785260 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.787010 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.799522 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844269 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.844323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.903532 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.945821 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.945946 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.946040 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.946412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: 
I0217 16:07:54.946852 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.965309 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.980835 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerStarted","Data":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"} Feb 17 16:07:54 crc kubenswrapper[4829]: I0217 16:07:54.997029 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhlg9" podStartSLOduration=2.57332609 podStartE2EDuration="4.997011285s" podCreationTimestamp="2026-02-17 16:07:50 +0000 UTC" firstStartedPulling="2026-02-17 16:07:51.915223836 +0000 UTC m=+784.332241814" lastFinishedPulling="2026-02-17 16:07:54.338909031 +0000 UTC m=+786.755927009" observedRunningTime="2026-02-17 16:07:54.995962824 +0000 UTC m=+787.412980812" watchObservedRunningTime="2026-02-17 16:07:54.997011285 +0000 UTC m=+787.414029263" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.031437 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.112121 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.366389 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz"] Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.567842 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"] Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986131 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="500e93f756bbd9dce2c1f230bbf359410a2ab2cb5aef71a9d300ea9b7abaf7a0" exitCode=0 Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986215 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"500e93f756bbd9dce2c1f230bbf359410a2ab2cb5aef71a9d300ea9b7abaf7a0"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.986243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerStarted","Data":"dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.989263 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="9c281425d585c4c09d0ce6e1170686f431088e4723cc45cf5b532ef15c09aa65" exitCode=0 Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 
16:07:55.990168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"9c281425d585c4c09d0ce6e1170686f431088e4723cc45cf5b532ef15c09aa65"} Feb 17 16:07:55 crc kubenswrapper[4829]: I0217 16:07:55.990230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerStarted","Data":"8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7"} Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.004728 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="504f76f252aec139780ab0b0ab9e059fdf322750f3db1ce2bbd16fe4ade1509d" exitCode=0 Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.004799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"504f76f252aec139780ab0b0ab9e059fdf322750f3db1ce2bbd16fe4ade1509d"} Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.007039 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="4f311cee486863896f0d0b561244b9e78487341a09d2e005b828973516f9eccd" exitCode=0 Feb 17 16:07:58 crc kubenswrapper[4829]: I0217 16:07:58.007075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"4f311cee486863896f0d0b561244b9e78487341a09d2e005b828973516f9eccd"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.028522 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"0da3d4ed97185dc0b4579d3d6a08b9bef01d516df9feba317d0a4cec41ef831f"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.028479 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerID="0da3d4ed97185dc0b4579d3d6a08b9bef01d516df9feba317d0a4cec41ef831f" exitCode=0 Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.038095 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerID="483ea08a6d40128fb85cce6a45b7d0089e6572f8293d7ba9fd96f371ecf39af4" exitCode=0 Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.038169 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"483ea08a6d40128fb85cce6a45b7d0089e6572f8293d7ba9fd96f371ecf39af4"} Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.539164 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"] Feb 17 16:07:59 crc kubenswrapper[4829]: I0217 16:07:59.539498 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pwbz6" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server" containerID="cri-o://60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" gracePeriod=2 Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.464416 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.466098 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571460 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571593 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571653 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571692 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm9dx\" (UniqueName: 
\"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") pod \"c5571b57-495c-43ce-88ed-ec6f10e58839\" (UID: \"c5571b57-495c-43ce-88ed-ec6f10e58839\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.571730 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") pod \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\" (UID: \"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.572665 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle" (OuterVolumeSpecName: "bundle") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.573528 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle" (OuterVolumeSpecName: "bundle") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.579757 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x" (OuterVolumeSpecName: "kube-api-access-2ll8x") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "kube-api-access-2ll8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.580886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx" (OuterVolumeSpecName: "kube-api-access-jm9dx") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "kube-api-access-jm9dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.584533 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util" (OuterVolumeSpecName: "util") pod "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" (UID: "ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.593042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util" (OuterVolumeSpecName: "util") pod "c5571b57-495c-43ce-88ed-ec6f10e58839" (UID: "c5571b57-495c-43ce-88ed-ec6f10e58839"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.671859 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672865 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672885 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672894 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ll8x\" (UniqueName: \"kubernetes.io/projected/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-kube-api-access-2ll8x\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672907 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5571b57-495c-43ce-88ed-ec6f10e58839-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672915 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm9dx\" (UniqueName: \"kubernetes.io/projected/c5571b57-495c-43ce-88ed-ec6f10e58839-kube-api-access-jm9dx\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.672923 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.773499 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: 
\"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.774798 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.774996 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") pod \"c5962bde-d309-4dbe-b4ce-750af54dec5c\" (UID: \"c5962bde-d309-4dbe-b4ce-750af54dec5c\") " Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.776487 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities" (OuterVolumeSpecName: "utilities") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.782562 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6" (OuterVolumeSpecName: "kube-api-access-mcdj6") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). InnerVolumeSpecName "kube-api-access-mcdj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.811824 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5962bde-d309-4dbe-b4ce-750af54dec5c" (UID: "c5962bde-d309-4dbe-b4ce-750af54dec5c"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.870144 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.870265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhlg9" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877553 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcdj6\" (UniqueName: \"kubernetes.io/projected/c5962bde-d309-4dbe-b4ce-750af54dec5c-kube-api-access-mcdj6\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877600 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:00 crc kubenswrapper[4829]: I0217 16:08:00.877615 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5962bde-d309-4dbe-b4ce-750af54dec5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056599 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" event={"ID":"c5571b57-495c-43ce-88ed-ec6f10e58839","Type":"ContainerDied","Data":"8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056634 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.056642 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8862df7d1a08624cf189efbc536d0e488765ab0258a5dcf0bd92ee71d4b2e7" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058486 4829 generic.go:334] "Generic (PLEG): container finished" podID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" exitCode=0 Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058590 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pwbz6" event={"ID":"c5962bde-d309-4dbe-b4ce-750af54dec5c","Type":"ContainerDied","Data":"c5cb0dd1445515215eb7b368acbc44a81aa61926a2485ced068d036df612d7ba"} Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058614 4829 scope.go:117] "RemoveContainer" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.058738 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pwbz6" Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063687 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj" event={"ID":"ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b","Type":"ContainerDied","Data":"dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8"}
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.063793 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc810f2bb87d8a79d0fbd3bdfb5dc2cbc30f536ebec44f556e7bb91d278447a8"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.082404 4829 scope.go:117] "RemoveContainer" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.092204 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"]
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.096717 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pwbz6"]
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.115137 4829 scope.go:117] "RemoveContainer" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.139967 4829 scope.go:117] "RemoveContainer" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"
Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.140505 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": container with ID starting with 60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4 not found: ID does not exist" containerID="60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.140559 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4"} err="failed to get container status \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": rpc error: code = NotFound desc = could not find container \"60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4\": container with ID starting with 60650f4f055dcc2de95440493017682927a9c2ff037398db12fa1a9e8db763d4 not found: ID does not exist"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.140608 4829 scope.go:117] "RemoveContainer" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"
Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.141002 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": container with ID starting with b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a not found: ID does not exist" containerID="b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141026 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a"} err="failed to get container status \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": rpc error: code = NotFound desc = could not find container \"b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a\": container with ID starting with b1a265da9d1c9558c16f30fb873ffc6642a1726bea0ce45f19d6c27e416c0f7a not found: ID does not exist"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141045 4829 scope.go:117] "RemoveContainer" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"
Feb 17 16:08:01 crc kubenswrapper[4829]: E0217 16:08:01.141278 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": container with ID starting with aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390 not found: ID does not exist" containerID="aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.141301 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390"} err="failed to get container status \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": rpc error: code = NotFound desc = could not find container \"aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390\": container with ID starting with aa0f6f73fdb01d3a016d70b3735056de427b13f1b28c2fb52677144c6cda4390 not found: ID does not exist"
Feb 17 16:08:01 crc kubenswrapper[4829]: I0217 16:08:01.920977 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhlg9" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:08:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:08:01 crc kubenswrapper[4829]: >
Feb 17 16:08:02 crc kubenswrapper[4829]: I0217 16:08:02.296012 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" path="/var/lib/kubelet/pods/c5962bde-d309-4dbe-b4ce-750af54dec5c/volumes"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144318 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsjr9"]
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144840 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="pull"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144851 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="pull"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144862 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-utilities"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144868 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-utilities"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144876 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-content"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144882 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="extract-content"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144892 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="util"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144897 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="util"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144908 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144913 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144923 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="pull"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144928 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="pull"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144938 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="util"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144943 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="util"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144951 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144957 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server"
Feb 17 16:08:08 crc kubenswrapper[4829]: E0217 16:08:08.144970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.144976 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145086 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5571b57-495c-43ce-88ed-ec6f10e58839" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145102 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5962bde-d309-4dbe-b4ce-750af54dec5c" containerName="registry-server"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.145112 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b" containerName="extract"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.146082 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.163682 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"]
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.302327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403666 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.403687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.404353 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.404808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.448491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"community-operators-tsjr9\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.461068 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9"
Feb 17 16:08:08 crc kubenswrapper[4829]: I0217 16:08:08.773753 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"]
Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.126977 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843" exitCode=0
Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.127127 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843"}
Feb 17 16:08:09 crc kubenswrapper[4829]: I0217 16:08:09.127409 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"71392bb15fe30737dcc91e4557eb2e9ef23b12f6bed7911efc5cbd153b7360e4"}
Feb 17 16:08:10 crc kubenswrapper[4829]: I0217 16:08:10.134340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab"}
Feb 17 16:08:10 crc kubenswrapper[4829]: I0217 16:08:10.946750 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhlg9"
Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.026817 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhlg9"
Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.142759 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab" exitCode=0
Feb 17 16:08:11 crc kubenswrapper[4829]: I0217 16:08:11.142839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab"}
Feb 17 16:08:12 crc kubenswrapper[4829]: I0217 16:08:12.154219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerStarted","Data":"52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5"}
Feb 17 16:08:12 crc kubenswrapper[4829]: I0217 16:08:12.181995 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsjr9" podStartSLOduration=1.698732711 podStartE2EDuration="4.18197081s" podCreationTimestamp="2026-02-17 16:08:08 +0000 UTC" firstStartedPulling="2026-02-17 16:08:09.128665864 +0000 UTC m=+801.545683852" lastFinishedPulling="2026-02-17 16:08:11.611903963 +0000 UTC m=+804.028921951" observedRunningTime="2026-02-17 16:08:12.174792907 +0000 UTC m=+804.591810885" watchObservedRunningTime="2026-02-17 16:08:12.18197081 +0000 UTC m=+804.598988798"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.077755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"]
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.078953 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081280 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081699 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.081747 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-246v8"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082552 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082776 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.082915 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.098534 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"]
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169530 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169611 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169647 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.169745 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271373 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271513 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.271539 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.272169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d845044e-d849-405d-a6ef-c2d76a5abba6-manager-config\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.278008 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-apiservice-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.278374 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.291359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d845044e-d849-405d-a6ef-c2d76a5abba6-webhook-cert\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.291862 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwkj\" (UniqueName: \"kubernetes.io/projected/d845044e-d849-405d-a6ef-c2d76a5abba6-kube-api-access-mzwkj\") pod \"loki-operator-controller-manager-5c6bf5887b-ljvq2\" (UID: \"d845044e-d849-405d-a6ef-c2d76a5abba6\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.397844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.527526 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"]
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.528538 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531023 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531186 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.531352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-ndsvz"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.536950 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"]
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.676279 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod \"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.777376 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod \"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg"
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.794861 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p66c\" (UniqueName: \"kubernetes.io/projected/54232488-a26b-4bdf-8b89-381241b92b54-kube-api-access-4p66c\") pod \"cluster-logging-operator-c769fd969-csdvg\" (UID: \"54232488-a26b-4bdf-8b89-381241b92b54\") " pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg"
Feb 17 16:08:13 crc kubenswrapper[4829]: W0217 16:08:13.845950 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd845044e_d849_405d_a6ef_c2d76a5abba6.slice/crio-c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8 WatchSource:0}: Error finding container c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8: Status 404 returned error can't find the container with id c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.845993 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2"]
Feb 17 16:08:13 crc kubenswrapper[4829]: I0217 16:08:13.850722 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg"
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.046299 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-csdvg"]
Feb 17 16:08:14 crc kubenswrapper[4829]: W0217 16:08:14.056737 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54232488_a26b_4bdf_8b89_381241b92b54.slice/crio-2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5 WatchSource:0}: Error finding container 2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5: Status 404 returned error can't find the container with id 2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.167105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"c7c2e76edc0ee5c9766f8c71b055bd33d229cd3ed3b0148927dba6aa2c9a13a8"}
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.168282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" event={"ID":"54232488-a26b-4bdf-8b89-381241b92b54","Type":"ContainerStarted","Data":"2b957859485f1bbf01236c6da5eee8e8eb2460713c70d44747150993415d9eb5"}
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.334825 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"]
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.335080 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhlg9" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" containerID="cri-o://57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" gracePeriod=2
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.717894 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9"
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894226 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") "
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") "
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.894374 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") pod \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\" (UID: \"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc\") "
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.897345 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities" (OuterVolumeSpecName: "utilities") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.917718 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk" (OuterVolumeSpecName: "kube-api-access-ms6rk") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "kube-api-access-ms6rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.998283 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms6rk\" (UniqueName: \"kubernetes.io/projected/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-kube-api-access-ms6rk\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:14 crc kubenswrapper[4829]: I0217 16:08:14.998314 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.028420 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" (UID: "b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.100446 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209319 4829 generic.go:334] "Generic (PLEG): container finished" podID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18" exitCode=0
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209389 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhlg9"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209379 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"}
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhlg9" event={"ID":"b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc","Type":"ContainerDied","Data":"747ea8fe9b8d0099815a9e67eb706998bb857d51b0eefecdf7d0c1e5e5268d24"}
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.209483 4829 scope.go:117] "RemoveContainer" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.228182 4829 scope.go:117] "RemoveContainer" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.245078 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"]
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.249224 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhlg9"]
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.269470 4829 scope.go:117] "RemoveContainer" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310234 4829 scope.go:117] "RemoveContainer" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"
Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.310794 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": container with ID starting with 57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18 not found: ID does not exist" containerID="57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310855 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18"} err="failed to get container status \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": rpc error: code = NotFound desc = could not find container \"57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18\": container with ID starting with 57f58dda9e4e76f338de8175910d551b7d1d32edfe3c98c872b626e41a652e18 not found: ID does not exist"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.310889 4829 scope.go:117] "RemoveContainer" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"
Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.311272 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": container with ID starting with 02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c not found: ID does not exist" containerID="02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.311337 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c"} err="failed to get container status \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": rpc error: code = NotFound desc = could not find container \"02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c\": container with ID starting with 02810ec45cd39862a3a282e4badf355be4a5feb62dccbb1391fe737dfc49d51c not found: ID does not exist"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.311374 4829 scope.go:117] "RemoveContainer" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"
Feb 17 16:08:15 crc kubenswrapper[4829]: E0217 16:08:15.313299 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": container with ID starting with d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828 not found: ID does not exist" containerID="d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"
Feb 17 16:08:15 crc kubenswrapper[4829]: I0217 16:08:15.313324 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828"} err="failed to get container status \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": rpc error: code = NotFound desc = could not find container \"d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828\": container with ID starting with d39a8164f3c3952ec95c816ae865b3be5495b7d986387dbfe33559485a6ac828 not found:
ID does not exist" Feb 17 16:08:16 crc kubenswrapper[4829]: I0217 16:08:16.291153 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" path="/var/lib/kubelet/pods/b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc/volumes" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.461450 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.461729 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:18 crc kubenswrapper[4829]: I0217 16:08:18.504909 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:19 crc kubenswrapper[4829]: I0217 16:08:19.309151 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.268651 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"ba72c41efe419b3422abc7bde3c04790e2e59a48d3430534b20b45fca82ff6b9"} Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.272482 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" event={"ID":"54232488-a26b-4bdf-8b89-381241b92b54","Type":"ContainerStarted","Data":"15b040fb3e7899376ade6063137f6935d3e43b40adbf5e55b1eed53dae4b925a"} Feb 17 16:08:22 crc kubenswrapper[4829]: I0217 16:08:22.301903 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-csdvg" podStartSLOduration=1.838106467 podStartE2EDuration="9.301878285s" 
podCreationTimestamp="2026-02-17 16:08:13 +0000 UTC" firstStartedPulling="2026-02-17 16:08:14.059259629 +0000 UTC m=+806.476277607" lastFinishedPulling="2026-02-17 16:08:21.523031437 +0000 UTC m=+813.940049425" observedRunningTime="2026-02-17 16:08:22.301054712 +0000 UTC m=+814.718072700" watchObservedRunningTime="2026-02-17 16:08:22.301878285 +0000 UTC m=+814.718896293" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.138031 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.138525 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tsjr9" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" containerID="cri-o://52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" gracePeriod=2 Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.283192 4829 generic.go:334] "Generic (PLEG): container finished" podID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerID="52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" exitCode=0 Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.283264 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5"} Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.679953 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.862744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863132 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863182 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") pod \"ca2bc313-c759-4b68-8a79-91cfb9059e60\" (UID: \"ca2bc313-c759-4b68-8a79-91cfb9059e60\") " Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.863911 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities" (OuterVolumeSpecName: "utilities") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.869565 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc" (OuterVolumeSpecName: "kube-api-access-25xzc") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "kube-api-access-25xzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.926452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2bc313-c759-4b68-8a79-91cfb9059e60" (UID: "ca2bc313-c759-4b68-8a79-91cfb9059e60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964817 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964852 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2bc313-c759-4b68-8a79-91cfb9059e60-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:23 crc kubenswrapper[4829]: I0217 16:08:23.964895 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25xzc\" (UniqueName: \"kubernetes.io/projected/ca2bc313-c759-4b68-8a79-91cfb9059e60-kube-api-access-25xzc\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.298708 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsjr9" event={"ID":"ca2bc313-c759-4b68-8a79-91cfb9059e60","Type":"ContainerDied","Data":"71392bb15fe30737dcc91e4557eb2e9ef23b12f6bed7911efc5cbd153b7360e4"} Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.299680 4829 scope.go:117] "RemoveContainer" containerID="52500ca46673fd47a4ddb3794299c87316d56e976c994a052bcaa73bb8d87ad5" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.298792 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsjr9" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.328682 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.332078 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tsjr9"] Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.338989 4829 scope.go:117] "RemoveContainer" containerID="eaedad5d9284ead75cb5883a4a3df5f8600931c06a1db2dd0e3526abc7e9c9ab" Feb 17 16:08:24 crc kubenswrapper[4829]: I0217 16:08:24.361151 4829 scope.go:117] "RemoveContainer" containerID="87d68f028fb934ca8b87bb1143147582e78e93e4c14d2e8670dbb451d5f72843" Feb 17 16:08:26 crc kubenswrapper[4829]: I0217 16:08:26.291851 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" path="/var/lib/kubelet/pods/ca2bc313-c759-4b68-8a79-91cfb9059e60/volumes" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.359255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" event={"ID":"d845044e-d849-405d-a6ef-c2d76a5abba6","Type":"ContainerStarted","Data":"39f4699e9f021d5f434136341eedaca0c0c1c1d7408ab84504a01535453bfcaa"} Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.359878 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.363352 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" Feb 17 16:08:30 crc kubenswrapper[4829]: I0217 16:08:30.402155 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators-redhat/loki-operator-controller-manager-5c6bf5887b-ljvq2" podStartSLOduration=1.906124768 podStartE2EDuration="17.402120706s" podCreationTimestamp="2026-02-17 16:08:13 +0000 UTC" firstStartedPulling="2026-02-17 16:08:13.849281409 +0000 UTC m=+806.266299387" lastFinishedPulling="2026-02-17 16:08:29.345277347 +0000 UTC m=+821.762295325" observedRunningTime="2026-02-17 16:08:30.392493774 +0000 UTC m=+822.809511822" watchObservedRunningTime="2026-02-17 16:08:30.402120706 +0000 UTC m=+822.819138724" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000001 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000619 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000638 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000659 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000669 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000692 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000703 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000736 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000746 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000760 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000768 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-content" Feb 17 16:08:34 crc kubenswrapper[4829]: E0217 16:08:34.000780 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000788 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="extract-utilities" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000930 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2bc313-c759-4b68-8a79-91cfb9059e60" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.000953 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b357df3f-9a38-47e0-b6ad-6e6f08c1a1dc" containerName="registry-server" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.001544 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.003598 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.003648 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.006836 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.132096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.132634 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.234266 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.234467 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: 
\"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.238241 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.238313 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78a1f1f59404e4c8f45632a04b4073b58fcf919b0e2b57c1f6ffde01f2db77fb/globalmount\"" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.260123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvpg\" (UniqueName: \"kubernetes.io/projected/f947362f-df3e-462c-af01-d31c8e524633-kube-api-access-nbvpg\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.277114 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-571848b2-4208-40fb-9f8f-c8b0b2266b77\") pod \"minio\" (UID: \"f947362f-df3e-462c-af01-d31c8e524633\") " pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.319540 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 17 16:08:34 crc kubenswrapper[4829]: I0217 16:08:34.757794 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:08:35 crc kubenswrapper[4829]: I0217 16:08:35.393690 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f947362f-df3e-462c-af01-d31c8e524633","Type":"ContainerStarted","Data":"a9919eaaf5ba6065bd7b230fbc8591757b05b256a18ae97cba58d18f27c588df"} Feb 17 16:08:38 crc kubenswrapper[4829]: I0217 16:08:38.415290 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f947362f-df3e-462c-af01-d31c8e524633","Type":"ContainerStarted","Data":"194a09ccdc4146f67ea826888bb30a1fba2326145655f42c49a864fa6b00f429"} Feb 17 16:08:38 crc kubenswrapper[4829]: I0217 16:08:38.432680 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.26252234 podStartE2EDuration="7.432665125s" podCreationTimestamp="2026-02-17 16:08:31 +0000 UTC" firstStartedPulling="2026-02-17 16:08:34.769110156 +0000 UTC m=+827.186128134" lastFinishedPulling="2026-02-17 16:08:37.939252911 +0000 UTC m=+830.356270919" observedRunningTime="2026-02-17 16:08:38.431146924 +0000 UTC m=+830.848164902" watchObservedRunningTime="2026-02-17 16:08:38.432665125 +0000 UTC m=+830.849683093" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.951097 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"] Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.954157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958160 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958258 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958670 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-bjxjt" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.958858 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.959259 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 17 16:08:43 crc kubenswrapper[4829]: I0217 16:08:43.966695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.073871 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.073969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: 
\"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074012 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074028 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.074060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.101779 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.102676 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.104934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.105365 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.106383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.114272 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.174957 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: 
\"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175043 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175585 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175685 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.175883 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.176252 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-config\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: 
\"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.176529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.183512 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.191355 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.192422 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199732 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/3e78e45a-c46f-4cfd-a487-56fad3cb0649-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199801 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.199930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.209084 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.210311 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdh8p\" (UniqueName: \"kubernetes.io/projected/3e78e45a-c46f-4cfd-a487-56fad3cb0649-kube-api-access-vdh8p\") pod \"logging-loki-distributor-5d5548c9f5-knrkx\" (UID: \"3e78e45a-c46f-4cfd-a487-56fad3cb0649\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.260103 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.265462 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273312 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273464 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-rccgh" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273616 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273681 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.273740 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.274189 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285129 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285232 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.285292 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.289735 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.291037 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.293105 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/76340faf-b2e5-461e-9172-a03eee715830-config\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.301097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.301282 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/76340faf-b2e5-461e-9172-a03eee715830-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.316359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdthf\" (UniqueName: \"kubernetes.io/projected/76340faf-b2e5-461e-9172-a03eee715830-kube-api-access-tdthf\") pod \"logging-loki-querier-76bf7b6d45-w7bl4\" (UID: \"76340faf-b2e5-461e-9172-a03eee715830\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.339789 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341228 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341360 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"] Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.341329 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387212 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387325 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387445 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387529 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387553 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387693 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.387720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: 
\"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.397415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.401631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90856a62-8a7f-479c-af7e-a95b8292618a-config\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.403119 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.422457 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.427251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/90856a62-8a7f-479c-af7e-a95b8292618a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.428502 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2h9x\" (UniqueName: \"kubernetes.io/projected/90856a62-8a7f-479c-af7e-a95b8292618a-kube-api-access-c2h9x\") pod \"logging-loki-query-frontend-6d6859c548-7v4zj\" (UID: \"90856a62-8a7f-479c-af7e-a95b8292618a\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488674 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.488980 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489018 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489033 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489178 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") 
pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489256 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489280 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489307 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.489333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.490850 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.491061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-rbac\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.491243 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.492211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.507618 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " 
pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.507774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tls-secret\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.510125 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-tenants\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.512382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797wv\" (UniqueName: \"kubernetes.io/projected/38a2308f-5d3c-4dac-b105-3d42a6b7bdd1-kube-api-access-797wv\") pod \"logging-loki-gateway-6d6859d459-8xxq9\" (UID: \"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.547107 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591417 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591538 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591565 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591601 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591619 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.591635 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.593223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.597498 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.598051 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.598301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-rbac\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.599408 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/52de54a3-9f80-412c-a925-25541914e2b0-lokistack-gateway\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.601485 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tenants\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.604946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/52de54a3-9f80-412c-a925-25541914e2b0-tls-secret\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.625027 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmb4\" (UniqueName: \"kubernetes.io/projected/52de54a3-9f80-412c-a925-25541914e2b0-kube-api-access-xlmb4\") pod \"logging-loki-gateway-6d6859d459-6lhvz\" (UID: \"52de54a3-9f80-412c-a925-25541914e2b0\") " pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.659365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4"]
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.663333 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"
Feb 17 16:08:44 crc kubenswrapper[4829]: W0217 16:08:44.665844 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76340faf_b2e5_461e_9172_a03eee715830.slice/crio-51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4 WatchSource:0}: Error finding container 51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4: Status 404 returned error can't find the container with id 51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.680626 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"
Feb 17 16:08:44 crc kubenswrapper[4829]: I0217 16:08:44.746126 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx"]
Feb 17 16:08:44 crc kubenswrapper[4829]: W0217 16:08:44.769168 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e78e45a_c46f_4cfd_a487_56fad3cb0649.slice/crio-67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc WatchSource:0}: Error finding container 67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc: Status 404 returned error can't find the container with id 67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.066766 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj"]
Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.070715 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90856a62_8a7f_479c_af7e_a95b8292618a.slice/crio-03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362 WatchSource:0}: Error finding container 03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362: Status 404 returned error can't find the container with id 03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.079942 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.081004 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.083043 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.083728 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.088623 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.139240 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-8xxq9"]
Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.143520 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a2308f_5d3c_4dac_b105_3d42a6b7bdd1.slice/crio-45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47 WatchSource:0}: Error finding container 45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47: Status 404 returned error can't find the container with id 45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.150865 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.151944 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.153783 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.153882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.163810 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.185103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6d6859d459-6lhvz"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.201882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.201948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202043 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.202212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rmt\" (UniqueName: \"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: W0217 16:08:45.209549 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52de54a3_9f80_412c_a925_25541914e2b0.slice/crio-1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a WatchSource:0}: Error finding container 1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a: Status 404 returned error can't find the container with id 1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.227637 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.228473 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.230219 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.230406 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.241705 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.303936 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rmt\" (UniqueName: \"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.303984 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304038 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304061 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304092 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304183 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304214 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304229 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.304940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305115 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305166 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305207 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305260 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305297 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.305690 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c5b31c-f45c-4a04-afc1-251ef93e471a-config\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.307916 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.307950 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e861a5096f5f0d1287f9a88513df974a6a9c92d5d1b4a4bae97166a7b3febbf7/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.310292 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.310380 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3a40b83791c7a77d3eb558f51ade9de37416943ff6cc471855c64f0b52b50f1/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.312735 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.313812 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.315767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a7c5b31c-f45c-4a04-afc1-251ef93e471a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.322395 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rmt\" (UniqueName: \"kubernetes.io/projected/a7c5b31c-f45c-4a04-afc1-251ef93e471a-kube-api-access-22rmt\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.338161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a6fcf607-9fa9-4bc8-9121-796745026d8f\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.341995 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd148747-aa33-44d4-bc84-90a4d805ceeb\") pod \"logging-loki-ingester-0\" (UID: \"a7c5b31c-f45c-4a04-afc1-251ef93e471a\") " pod="openshift-logging/logging-loki-ingester-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406148 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406199 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406269 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406286 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406311 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406340 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406357 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406391 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406411 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.406460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.408136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.408977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0"
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.410305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0"
Feb 17 16:08:45 crc
kubenswrapper[4829]: I0217 16:08:45.410436 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.410860 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.411024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.411224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-config\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.412140 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.412256 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.413989 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.414027 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f022bf64a59c1be903ef93f415580ba9af908757cb0725ae917d6880abb7ea9/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.415023 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7bf847ac-1d33-4bad-8882-4661d8f33da8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.415772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.418507 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.418546 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b43741fcaf0f6728a264b5d8e8846f094e17347790ad69ae2ff64917e7ad50d4/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.428911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghln\" (UniqueName: \"kubernetes.io/projected/7bf847ac-1d33-4bad-8882-4661d8f33da8-kube-api-access-tghln\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.431883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76jq\" (UniqueName: \"kubernetes.io/projected/c7dd4bfd-add5-4b6b-a938-5e8ae8433d10-kube-api-access-v76jq\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.466028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"45d8913b43e69ccab9d2671966bf627c41d093b5b4d972ad914405dd35343f47"} Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.466834 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" 
event={"ID":"90856a62-8a7f-479c-af7e-a95b8292618a","Type":"ContainerStarted","Data":"03286098332d4ed8451e81377e92f471473bf877c3e4b267e5032eaaedfbc362"} Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.468268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" event={"ID":"76340faf-b2e5-461e-9172-a03eee715830","Type":"ContainerStarted","Data":"51a846d3e75204bf49d8af017bbe17498aa6614ac028e15af84b20d77fe813a4"} Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.469203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"1100ed7f6bcfa23e60bd29acecaa9f81487515f58214f2bdc441931cadc13b5a"} Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.469508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddb99c0-93a1-413f-9349-fa97424b39dd\") pod \"logging-loki-compactor-0\" (UID: \"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.470151 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" event={"ID":"3e78e45a-c46f-4cfd-a487-56fad3cb0649","Type":"ContainerStarted","Data":"67b40754376e9ce36f853ffb8dfda9942e029357b6202b348a47cc826c0a31dc"} Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.477568 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ecb51fed-40e5-49b8-bc9c-3d4981cc0aeb\") pod \"logging-loki-index-gateway-0\" (UID: \"7bf847ac-1d33-4bad-8882-4661d8f33da8\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.558213 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.767334 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:45 crc kubenswrapper[4829]: I0217 16:08:45.971760 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.082163 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.206683 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 16:08:46 crc kubenswrapper[4829]: W0217 16:08:46.218956 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7dd4bfd_add5_4b6b_a938_5e8ae8433d10.slice/crio-460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd WatchSource:0}: Error finding container 460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd: Status 404 returned error can't find the container with id 460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.480153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a7c5b31c-f45c-4a04-afc1-251ef93e471a","Type":"ContainerStarted","Data":"b7c05feab7d9fbcd578a2ece1545d8ce879d457d9bb03dda0bbbf9a7e4d6dc25"} Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.481109 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10","Type":"ContainerStarted","Data":"460ef13f163e1a06e9acd6503ee1cc64b57bae12354ae7218f9860165fb7f9cd"} Feb 17 16:08:46 crc kubenswrapper[4829]: I0217 16:08:46.481864 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7bf847ac-1d33-4bad-8882-4661d8f33da8","Type":"ContainerStarted","Data":"e87ee7b3b50d4607829cd4eae44e1099d6244218b95876dda8d89c2567638c5d"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.506241 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" event={"ID":"76340faf-b2e5-461e-9172-a03eee715830","Type":"ContainerStarted","Data":"811697a9b1ff759b6e30e692f2c95294982457094cc342cd770f03e257b912a8"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.506602 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.509902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a7c5b31c-f45c-4a04-afc1-251ef93e471a","Type":"ContainerStarted","Data":"270b53e2496f7577b11d1051da265f3a02f93a80c0f7d4954d147a6445e2144c"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.510139 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.513093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"98598ab6d4962b1587cf43b25e0655a4e20c8080617afb70d1b3b0b7ce2b163b"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.517283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" event={"ID":"3e78e45a-c46f-4cfd-a487-56fad3cb0649","Type":"ContainerStarted","Data":"71442ed0ebf28802dd3e6974191297917b0d9883339122decf98f1c28113a84e"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.517398 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.519718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"aca1dd42c199facbfa267d6584d4ded803b90be84ebfb21ab914da6b8fedea34"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.521925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" event={"ID":"90856a62-8a7f-479c-af7e-a95b8292618a","Type":"ContainerStarted","Data":"a06ca0982f53531643f81645359aef99245f632e7d1218b8f8dbcfd662282709"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.522111 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.524316 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"c7dd4bfd-add5-4b6b-a938-5e8ae8433d10","Type":"ContainerStarted","Data":"abe53713cc41341bb80e28237643197db45973014c7bbe9bff1453219a49142f"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.524454 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.526362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" 
event={"ID":"7bf847ac-1d33-4bad-8882-4661d8f33da8","Type":"ContainerStarted","Data":"2f24fe25640262f273ce96f1c91bff695933d0e3a5cbea23562b81090aac3db3"} Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.526639 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.537874 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" podStartSLOduration=1.942560452 podStartE2EDuration="5.53785339s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:44.671273767 +0000 UTC m=+837.088291745" lastFinishedPulling="2026-02-17 16:08:48.266566705 +0000 UTC m=+840.683584683" observedRunningTime="2026-02-17 16:08:49.534025036 +0000 UTC m=+841.951043054" watchObservedRunningTime="2026-02-17 16:08:49.53785339 +0000 UTC m=+841.954871408" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.579740 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.292940693 podStartE2EDuration="5.579720386s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.980543692 +0000 UTC m=+838.397561660" lastFinishedPulling="2026-02-17 16:08:48.267323365 +0000 UTC m=+840.684341353" observedRunningTime="2026-02-17 16:08:49.572198511 +0000 UTC m=+841.989216499" watchObservedRunningTime="2026-02-17 16:08:49.579720386 +0000 UTC m=+841.996738374" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.601531 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" podStartSLOduration=3.113261803 podStartE2EDuration="6.601508976s" podCreationTimestamp="2026-02-17 16:08:43 +0000 UTC" firstStartedPulling="2026-02-17 16:08:44.772094962 +0000 UTC m=+837.189112940" 
lastFinishedPulling="2026-02-17 16:08:48.260342105 +0000 UTC m=+840.677360113" observedRunningTime="2026-02-17 16:08:49.599996465 +0000 UTC m=+842.017014473" watchObservedRunningTime="2026-02-17 16:08:49.601508976 +0000 UTC m=+842.018526964" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.636514 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.584196933 podStartE2EDuration="5.636475855s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:46.221226091 +0000 UTC m=+838.638244059" lastFinishedPulling="2026-02-17 16:08:48.273505003 +0000 UTC m=+840.690522981" observedRunningTime="2026-02-17 16:08:49.629244579 +0000 UTC m=+842.046262567" watchObservedRunningTime="2026-02-17 16:08:49.636475855 +0000 UTC m=+842.053493843" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.654963 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.471711832 podStartE2EDuration="5.654941706s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:46.0847771 +0000 UTC m=+838.501795078" lastFinishedPulling="2026-02-17 16:08:48.268006954 +0000 UTC m=+840.685024952" observedRunningTime="2026-02-17 16:08:49.6524979 +0000 UTC m=+842.069515888" watchObservedRunningTime="2026-02-17 16:08:49.654941706 +0000 UTC m=+842.071959684" Feb 17 16:08:49 crc kubenswrapper[4829]: I0217 16:08:49.681481 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" podStartSLOduration=2.5538390939999998 podStartE2EDuration="5.681454315s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.074040213 +0000 UTC m=+837.491058201" lastFinishedPulling="2026-02-17 16:08:48.201655444 +0000 UTC m=+840.618673422" 
observedRunningTime="2026-02-17 16:08:49.674733223 +0000 UTC m=+842.091751221" watchObservedRunningTime="2026-02-17 16:08:49.681454315 +0000 UTC m=+842.098472293" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.542972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" event={"ID":"38a2308f-5d3c-4dac-b105-3d42a6b7bdd1","Type":"ContainerStarted","Data":"115bbf832da28ba6694e9713df6612e5c8a5717206df7fc0da8f43d7adb59986"} Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.543563 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.543662 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.544966 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" event={"ID":"52de54a3-9f80-412c-a925-25541914e2b0","Type":"ContainerStarted","Data":"5c418dd2d77a5af464833ba222d0f29363a17df4c659fece282e9f95c09fa60b"} Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.545202 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.553498 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.560038 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.565110 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.578211 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6d6859d459-8xxq9" podStartSLOduration=2.205503596 podStartE2EDuration="7.578182657s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.145754598 +0000 UTC m=+837.562772576" lastFinishedPulling="2026-02-17 16:08:50.518433659 +0000 UTC m=+842.935451637" observedRunningTime="2026-02-17 16:08:51.567612211 +0000 UTC m=+843.984630199" watchObservedRunningTime="2026-02-17 16:08:51.578182657 +0000 UTC m=+843.995200675" Feb 17 16:08:51 crc kubenswrapper[4829]: I0217 16:08:51.597457 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" podStartSLOduration=2.313823273 podStartE2EDuration="7.597436489s" podCreationTimestamp="2026-02-17 16:08:44 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.211686376 +0000 UTC m=+837.628704354" lastFinishedPulling="2026-02-17 16:08:50.495299592 +0000 UTC m=+842.912317570" observedRunningTime="2026-02-17 16:08:51.592356771 +0000 UTC m=+844.009374789" watchObservedRunningTime="2026-02-17 16:08:51.597436489 +0000 UTC m=+844.014454467" Feb 17 16:08:52 crc kubenswrapper[4829]: I0217 16:08:52.554895 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:08:52 crc kubenswrapper[4829]: I0217 16:08:52.567912 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6d6859d459-6lhvz" Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.293854 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-knrkx" Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.429888 
4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-w7bl4" Feb 17 16:09:04 crc kubenswrapper[4829]: I0217 16:09:04.556693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-7v4zj" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.420755 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.421301 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.567857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 16:09:05 crc kubenswrapper[4829]: I0217 16:09:05.778294 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 17 16:09:15 crc kubenswrapper[4829]: I0217 16:09:15.418390 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 16:09:15 crc kubenswrapper[4829]: I0217 16:09:15.420053 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 
16:09:25 crc kubenswrapper[4829]: I0217 16:09:25.417023 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:09:25 crc kubenswrapper[4829]: I0217 16:09:25.417783 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:35 crc kubenswrapper[4829]: I0217 16:09:35.415255 4829 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 16:09:35 crc kubenswrapper[4829]: I0217 16:09:35.415628 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a7c5b31c-f45c-4a04-afc1-251ef93e471a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:09:45 crc kubenswrapper[4829]: I0217 16:09:45.418862 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.751853 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.757074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.766337 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.887597 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.888052 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.888221 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990718 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.990959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.991262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-utilities\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:46 crc kubenswrapper[4829]: I0217 16:09:46.991299 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11288751-f708-4745-96fa-625be709d265-catalog-content\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.032805 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8g9\" (UniqueName: \"kubernetes.io/projected/11288751-f708-4745-96fa-625be709d265-kube-api-access-ft8g9\") pod \"certified-operators-xgnph\" (UID: \"11288751-f708-4745-96fa-625be709d265\") " pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.079363 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:47 crc kubenswrapper[4829]: I0217 16:09:47.583379 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"] Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020875 4829 generic.go:334] "Generic (PLEG): container finished" podID="11288751-f708-4745-96fa-625be709d265" containerID="bc6744f09138f5aa87c11faadd70077d0a62ba785aae5ae1e92283729ce3768c" exitCode=0 Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020941 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerDied","Data":"bc6744f09138f5aa87c11faadd70077d0a62ba785aae5ae1e92283729ce3768c"} Feb 17 16:09:48 crc kubenswrapper[4829]: I0217 16:09:48.020985 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerStarted","Data":"402cead6fc56aaa5adc0f7ecbd14bf2fe1010dfdb7732a80d93f22e151d3d5d5"} Feb 17 16:09:52 crc kubenswrapper[4829]: I0217 16:09:52.424942 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:52 crc kubenswrapper[4829]: I0217 16:09:52.425234 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:53 crc kubenswrapper[4829]: I0217 16:09:53.059040 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="11288751-f708-4745-96fa-625be709d265" containerID="f0f1933635205a797290236ef1808afed82485d095a4bc966936f5165644cd68" exitCode=0 Feb 17 16:09:53 crc kubenswrapper[4829]: I0217 16:09:53.059096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerDied","Data":"f0f1933635205a797290236ef1808afed82485d095a4bc966936f5165644cd68"} Feb 17 16:09:54 crc kubenswrapper[4829]: I0217 16:09:54.070600 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgnph" event={"ID":"11288751-f708-4745-96fa-625be709d265","Type":"ContainerStarted","Data":"0a5d2598b77ae8e825ac5d8cf1c1b53ecf7814c96e5f7aaf259f43223f8d6a78"} Feb 17 16:09:54 crc kubenswrapper[4829]: I0217 16:09:54.091762 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgnph" podStartSLOduration=2.6425731580000003 podStartE2EDuration="8.091739638s" podCreationTimestamp="2026-02-17 16:09:46 +0000 UTC" firstStartedPulling="2026-02-17 16:09:48.023861089 +0000 UTC m=+900.440879107" lastFinishedPulling="2026-02-17 16:09:53.473027609 +0000 UTC m=+905.890045587" observedRunningTime="2026-02-17 16:09:54.084867112 +0000 UTC m=+906.501885160" watchObservedRunningTime="2026-02-17 16:09:54.091739638 +0000 UTC m=+906.508757656" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.079619 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.080011 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:09:57 crc kubenswrapper[4829]: I0217 16:09:57.146777 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-xgnph" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.922864 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.924119 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.929693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930061 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-72v7n" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930126 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930180 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.930413 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.933918 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 16:10:02 crc kubenswrapper[4829]: I0217 16:10:02.936703 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.082240 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:03 crc kubenswrapper[4829]: E0217 16:10:03.083015 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint 
kube-api-access-pr2kc metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-mrvfp" podUID="ee08f929-2d75-418a-ba47-8f64355f622d" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107367 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: 
\"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107447 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107496 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107593 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107659 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.107700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.141925 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.151102 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.208939 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.208998 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209022 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 
crc kubenswrapper[4829]: I0217 16:10:03.209042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209057 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209071 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209086 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.209213 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210691 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"collector-mrvfp\" (UID: 
\"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.210967 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.211543 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.214550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.215209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.215922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.216911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.229815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.266957 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"collector-mrvfp\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " pod="openshift-logging/collector-mrvfp" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411478 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411778 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411799 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") pod 
\"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411843 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411865 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411942 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.411995 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412050 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412076 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412099 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") pod \"ee08f929-2d75-418a-ba47-8f64355f622d\" (UID: \"ee08f929-2d75-418a-ba47-8f64355f622d\") " Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412225 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412478 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir" (OuterVolumeSpecName: "datadir") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412927 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.412940 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config" (OuterVolumeSpecName: "config") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.413043 4829 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.413078 4829 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.415438 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token" (OuterVolumeSpecName: "collector-token") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.415908 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token" (OuterVolumeSpecName: "sa-token") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.416035 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.420760 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp" (OuterVolumeSpecName: "tmp") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.421351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc" (OuterVolumeSpecName: "kube-api-access-pr2kc") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "kube-api-access-pr2kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.421737 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics" (OuterVolumeSpecName: "metrics") pod "ee08f929-2d75-418a-ba47-8f64355f622d" (UID: "ee08f929-2d75-418a-ba47-8f64355f622d"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514293 4829 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514594 4829 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ee08f929-2d75-418a-ba47-8f64355f622d-datadir\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514665 4829 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ee08f929-2d75-418a-ba47-8f64355f622d-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514734 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514791 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee08f929-2d75-418a-ba47-8f64355f622d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514850 4829 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.514911 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr2kc\" (UniqueName: \"kubernetes.io/projected/ee08f929-2d75-418a-ba47-8f64355f622d-kube-api-access-pr2kc\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.515048 4829 reconciler_common.go:293] "Volume detached for volume 
\"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:03 crc kubenswrapper[4829]: I0217 16:10:03.515109 4829 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ee08f929-2d75-418a-ba47-8f64355f622d-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.149153 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mrvfp" Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.216102 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.225303 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-mrvfp"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.239280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-j7l9k"] Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.240888 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.242560 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-j7l9k"]
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.251341 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.252868 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-72v7n"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.253782 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.253970 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.254092 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.262874 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.287421 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee08f929-2d75-418a-ba47-8f64355f622d" path="/var/lib/kubelet/pods/ee08f929-2d75-418a-ba47-8f64355f622d/volumes"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429513 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429584 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429664 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429696 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429723 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429848 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429902 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.429946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.430005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531624 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531745 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531816 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531898 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.531981 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532015 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532100 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.532714 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/768f24d9-7e75-4b78-a2a7-10cdfd579577-datadir\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533173 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config-openshift-service-cacrt\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533425 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-entrypoint\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-config\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.533789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/768f24d9-7e75-4b78-a2a7-10cdfd579577-trusted-ca\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.537524 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/768f24d9-7e75-4b78-a2a7-10cdfd579577-tmp\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.538557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-syslog-receiver\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.538651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-collector-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.540059 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/768f24d9-7e75-4b78-a2a7-10cdfd579577-metrics\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.556180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzclg\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-kube-api-access-xzclg\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.561976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/768f24d9-7e75-4b78-a2a7-10cdfd579577-sa-token\") pod \"collector-j7l9k\" (UID: \"768f24d9-7e75-4b78-a2a7-10cdfd579577\") " pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:04 crc kubenswrapper[4829]: I0217 16:10:04.564140 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-j7l9k"
Feb 17 16:10:05 crc kubenswrapper[4829]: I0217 16:10:05.065761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-j7l9k"]
Feb 17 16:10:05 crc kubenswrapper[4829]: I0217 16:10:05.159762 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-j7l9k" event={"ID":"768f24d9-7e75-4b78-a2a7-10cdfd579577","Type":"ContainerStarted","Data":"bb7dd5c19deab8329594890322ef7efbc4b543d2f9f2f9ccf829c4d3ec8957e7"}
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.174030 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgnph"
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.270809 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgnph"]
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.306043 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.306293 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqfvj" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server" containerID="cri-o://2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" gracePeriod=2
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.697931 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.825840 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") "
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.825923 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") "
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.826034 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") pod \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\" (UID: \"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3\") "
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.826966 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities" (OuterVolumeSpecName: "utilities") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.833349 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj" (OuterVolumeSpecName: "kube-api-access-fcbhj") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "kube-api-access-fcbhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.885530 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" (UID: "92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927914 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927957 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:07 crc kubenswrapper[4829]: I0217 16:10:07.927969 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcbhj\" (UniqueName: \"kubernetes.io/projected/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3-kube-api-access-fcbhj\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.198797 4829 generic.go:334] "Generic (PLEG): container finished" podID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06" exitCode=0
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"}
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199058 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqfvj" event={"ID":"92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3","Type":"ContainerDied","Data":"bf86b13da18449629a51340681937919a16230add94f77ec9352bea5db2de7c4"}
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199073 4829 scope.go:117] "RemoveContainer" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.199167 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqfvj"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.225260 4829 scope.go:117] "RemoveContainer" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.231169 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.241015 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqfvj"]
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.288559 4829 scope.go:117] "RemoveContainer" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.290213 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" path="/var/lib/kubelet/pods/92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3/volumes"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336210 4829 scope.go:117] "RemoveContainer" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"
Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.336599 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": container with ID starting with 2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06 not found: ID does not exist" containerID="2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336622 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06"} err="failed to get container status \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": rpc error: code = NotFound desc = could not find container \"2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06\": container with ID starting with 2ef367d7e6b8bfbc7ee2809f0b82674045bbbebe923d1d79e66e90cdbd0a0c06 not found: ID does not exist"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.336640 4829 scope.go:117] "RemoveContainer" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"
Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.337629 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": container with ID starting with 2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21 not found: ID does not exist" containerID="2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.337652 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21"} err="failed to get container status \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": rpc error: code = NotFound desc = could not find container \"2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21\": container with ID starting with 2bc7688a8f01ba549e6eeefd3c519328995bdd802f840297d5612c986bf57e21 not found: ID does not exist"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.337664 4829 scope.go:117] "RemoveContainer" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"
Feb 17 16:10:08 crc kubenswrapper[4829]: E0217 16:10:08.338124 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": container with ID starting with 2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325 not found: ID does not exist" containerID="2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"
Feb 17 16:10:08 crc kubenswrapper[4829]: I0217 16:10:08.338168 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325"} err="failed to get container status \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": rpc error: code = NotFound desc = could not find container \"2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325\": container with ID starting with 2d417cb3e567e221059678c8dd6c18d2006f1fe2c18730e0c905b009995f8325 not found: ID does not exist"
Feb 17 16:10:14 crc kubenswrapper[4829]: I0217 16:10:14.293703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-j7l9k" event={"ID":"768f24d9-7e75-4b78-a2a7-10cdfd579577","Type":"ContainerStarted","Data":"37ad35872a9cc39af81a394d4803d6aa082192a133ee08b01812243e5e65f745"}
Feb 17 16:10:14 crc kubenswrapper[4829]: I0217 16:10:14.304104 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-j7l9k" podStartSLOduration=1.663946756 podStartE2EDuration="10.304083816s" podCreationTimestamp="2026-02-17 16:10:04 +0000 UTC" firstStartedPulling="2026-02-17 16:10:05.077531932 +0000 UTC m=+917.494549910" lastFinishedPulling="2026-02-17 16:10:13.717668992 +0000 UTC m=+926.134686970" observedRunningTime="2026-02-17 16:10:14.300184619 +0000 UTC m=+926.717202597" watchObservedRunningTime="2026-02-17 16:10:14.304083816 +0000 UTC m=+926.721101804"
Feb 17 16:10:22 crc kubenswrapper[4829]: I0217 16:10:22.425376 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:10:22 crc kubenswrapper[4829]: I0217 16:10:22.425953 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450233 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"]
Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 16:10:44.450879 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-utilities"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450892 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-utilities"
Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 16:10:44.450904 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-content"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450910 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="extract-content"
Feb 17 16:10:44 crc kubenswrapper[4829]: E0217 16:10:44.450920 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.450926 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.451082 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bf9e45-4314-4bab-8fda-e0fbf0e5e2b3" containerName="registry-server"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.452046 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.454276 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.468940 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"]
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488692 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488788 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.488808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590709 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.590808 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.591382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.591386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.617684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:44 crc kubenswrapper[4829]: I0217 16:10:44.770965 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.080159 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"]
Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515126 4829 generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="874a55bc34adca66ed5a7c0d077eab2f9ade225a0e42b28ec2051f629c6eea06" exitCode=0
Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"874a55bc34adca66ed5a7c0d077eab2f9ade225a0e42b28ec2051f629c6eea06"}
Feb 17 16:10:45 crc kubenswrapper[4829]: I0217 16:10:45.515192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerStarted","Data":"e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83"}
Feb 17 16:10:48 crc kubenswrapper[4829]: I0217 16:10:48.540408 4829 generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="9a49718063f82a427a5de708cd484941a8be3c9835d6a16237ffe32ce44354d6" exitCode=0
Feb 17 16:10:48 crc kubenswrapper[4829]: I0217 16:10:48.540525 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"9a49718063f82a427a5de708cd484941a8be3c9835d6a16237ffe32ce44354d6"}
Feb 17 16:10:49 crc kubenswrapper[4829]: I0217 16:10:49.551986 4829 generic.go:334] "Generic (PLEG): container finished" podID="2f38714a-d191-4850-8b52-257b43af4a40" containerID="347d214a1f469ad7a36586def45e331e743cf878e189bb10837deda08ea995d7" exitCode=0
Feb 17 16:10:49 crc kubenswrapper[4829]: I0217 16:10:49.552028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"347d214a1f469ad7a36586def45e331e743cf878e189bb10837deda08ea995d7"}
Feb 17 16:10:50 crc kubenswrapper[4829]: I0217 16:10:50.905880 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl"
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105119 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") "
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105233 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") "
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105297 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") pod \"2f38714a-d191-4850-8b52-257b43af4a40\" (UID: \"2f38714a-d191-4850-8b52-257b43af4a40\") "
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.105876 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle" (OuterVolumeSpecName: "bundle") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.106221 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.116074 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util" (OuterVolumeSpecName: "util") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.116984 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn" (OuterVolumeSpecName: "kube-api-access-qsdxn") pod "2f38714a-d191-4850-8b52-257b43af4a40" (UID: "2f38714a-d191-4850-8b52-257b43af4a40"). InnerVolumeSpecName "kube-api-access-qsdxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.211843 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsdxn\" (UniqueName: \"kubernetes.io/projected/2f38714a-d191-4850-8b52-257b43af4a40-kube-api-access-qsdxn\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.211887 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f38714a-d191-4850-8b52-257b43af4a40-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573351 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" event={"ID":"2f38714a-d191-4850-8b52-257b43af4a40","Type":"ContainerDied","Data":"e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83"}
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573402 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ecef8871642adc8127caff743d2aea511f4b1e5a5fc5d4b059ce5608f6df83"
Feb 17 16:10:51 crc kubenswrapper[4829]: I0217 16:10:51.573449 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.424895 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.424991 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.425064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.426125 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.426285 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" gracePeriod=600 Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.590493 4829 generic.go:334] "Generic (PLEG): 
container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" exitCode=0 Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.590810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b"} Feb 17 16:10:52 crc kubenswrapper[4829]: I0217 16:10:52.591078 4829 scope.go:117] "RemoveContainer" containerID="ebbe575e7f93382897403219c0a5a59bd73ebb281964c2210e071cd8df55c074" Feb 17 16:10:53 crc kubenswrapper[4829]: I0217 16:10:53.602366 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.945637 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946313 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946341 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="util" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946347 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="util" Feb 17 16:10:54 crc kubenswrapper[4829]: E0217 16:10:54.946355 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="pull" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946362 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="pull" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.946481 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f38714a-d191-4850-8b52-257b43af4a40" containerName="extract" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.947106 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.949823 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.949970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gp7nj" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.950387 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:10:54 crc kubenswrapper[4829]: I0217 16:10:54.966799 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.073913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.175248 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.201563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4hvg\" (UniqueName: \"kubernetes.io/projected/e597d80c-fb6d-45a3-9b01-4a32a59f07a6-kube-api-access-p4hvg\") pod \"nmstate-operator-694c9596b7-lpfx5\" (UID: \"e597d80c-fb6d-45a3-9b01-4a32a59f07a6\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.279224 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" Feb 17 16:10:55 crc kubenswrapper[4829]: I0217 16:10:55.679399 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lpfx5"] Feb 17 16:10:55 crc kubenswrapper[4829]: W0217 16:10:55.680697 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode597d80c_fb6d_45a3_9b01_4a32a59f07a6.slice/crio-b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287 WatchSource:0}: Error finding container b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287: Status 404 returned error can't find the container with id b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287 Feb 17 16:10:56 crc kubenswrapper[4829]: I0217 16:10:56.629153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" event={"ID":"e597d80c-fb6d-45a3-9b01-4a32a59f07a6","Type":"ContainerStarted","Data":"b467f14d4df34d1dacd3c1584c312ba58dc33e76d396407c32f919868b5aa287"} Feb 17 16:10:58 crc kubenswrapper[4829]: I0217 
16:10:58.668430 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" event={"ID":"e597d80c-fb6d-45a3-9b01-4a32a59f07a6","Type":"ContainerStarted","Data":"039dbb88fab254603228749cbe5085cc9e2ef51e16d9e59f8315746a75e706b7"} Feb 17 16:10:58 crc kubenswrapper[4829]: I0217 16:10:58.686563 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-lpfx5" podStartSLOduration=2.723690505 podStartE2EDuration="4.686544145s" podCreationTimestamp="2026-02-17 16:10:54 +0000 UTC" firstStartedPulling="2026-02-17 16:10:55.684047405 +0000 UTC m=+968.101065383" lastFinishedPulling="2026-02-17 16:10:57.646901025 +0000 UTC m=+970.063919023" observedRunningTime="2026-02-17 16:10:58.682645519 +0000 UTC m=+971.099663497" watchObservedRunningTime="2026-02-17 16:10:58.686544145 +0000 UTC m=+971.103562133" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.484684 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.486952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.488398 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g6zcq" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.491121 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.492237 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.493546 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.501608 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.520346 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.551426 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-47lp4"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.553074 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647499 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647587 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647652 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647741 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.647807 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.652869 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.653876 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.657234 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.657392 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-x5nwp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.672450 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.674847 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750454 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750507 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750617 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750635 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.750980 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-dbus-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.751332 4829 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.751380 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair podName:55a7b0a0-24f0-4b6b-82bf-f131f831af3a nodeName:}" failed. 
No retries permitted until 2026-02-17 16:11:06.251362249 +0000 UTC m=+978.668380227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair") pod "nmstate-webhook-866bcb46dc-v2bww" (UID: "55a7b0a0-24f0-4b6b-82bf-f131f831af3a") : secret "openshift-nmstate-webhook" not found Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.751619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-ovs-socket\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.751648 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-nmstate-lock\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.788533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kz97\" (UniqueName: \"kubernetes.io/projected/20b39811-2839-4b55-a69e-a293416edb22-kube-api-access-2kz97\") pod \"nmstate-metrics-58c85c668d-85cbd\" (UID: \"20b39811-2839-4b55-a69e-a293416edb22\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.802863 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbf2g\" (UniqueName: \"kubernetes.io/projected/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-kube-api-access-wbf2g\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 
16:11:05.803010 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtrn\" (UniqueName: \"kubernetes.io/projected/4e62a7c0-ac99-4dd8-a587-58c98adb3a25-kube-api-access-8mtrn\") pod \"nmstate-handler-47lp4\" (UID: \"4e62a7c0-ac99-4dd8-a587-58c98adb3a25\") " pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.813011 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857478 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.857623 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.857785 4829 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 
16:11:05 crc kubenswrapper[4829]: E0217 16:11:05.857839 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert podName:df7e3d75-f36c-4258-ae86-6bb72db7c0e4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:06.35782332 +0000 UTC m=+978.774841298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-mchvp" (UID: "df7e3d75-f36c-4258-ae86-6bb72db7c0e4") : secret "plugin-serving-cert" not found Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.859154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.869342 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.910253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649pf\" (UniqueName: \"kubernetes.io/projected/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-kube-api-access-649pf\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.940426 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.943860 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:05 crc kubenswrapper[4829]: I0217 16:11:05.976066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067415 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067471 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067638 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067710 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067728 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.067745 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169236 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 
16:11:06.169262 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169285 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169366 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.169395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170276 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170522 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.170727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.173040 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.173178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.187045 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"console-864565556d-824bj\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.271258 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.274773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/55a7b0a0-24f0-4b6b-82bf-f131f831af3a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-v2bww\" (UID: \"55a7b0a0-24f0-4b6b-82bf-f131f831af3a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.308369 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.349846 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-85cbd"] Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.373206 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.376871 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/df7e3d75-f36c-4258-ae86-6bb72db7c0e4-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-mchvp\" (UID: \"df7e3d75-f36c-4258-ae86-6bb72db7c0e4\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.425229 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.575648 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.732112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"908d77668dd9f13bf54ca68f6bc92a171a53518d505cbec033eff4cacdd9303d"} Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.734070 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-47lp4" event={"ID":"4e62a7c0-ac99-4dd8-a587-58c98adb3a25","Type":"ContainerStarted","Data":"cad8acfbdb19eee6f9c474f995a0155668bd17c0d5d0ea98b7bb7f5af5a20f25"} Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.805378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.813489 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc453fb9_9d54_4441_bcae_64e34e837dac.slice/crio-1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7 WatchSource:0}: Error finding container 1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7: Status 404 returned error can't find the container with id 1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7 Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.834034 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.841155 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf7e3d75_f36c_4258_ae86_6bb72db7c0e4.slice/crio-afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093 WatchSource:0}: Error finding container 
afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093: Status 404 returned error can't find the container with id afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093 Feb 17 16:11:06 crc kubenswrapper[4829]: I0217 16:11:06.911188 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww"] Feb 17 16:11:06 crc kubenswrapper[4829]: W0217 16:11:06.915468 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a7b0a0_24f0_4b6b_82bf_f131f831af3a.slice/crio-2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16 WatchSource:0}: Error finding container 2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16: Status 404 returned error can't find the container with id 2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16 Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.754318 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" event={"ID":"df7e3d75-f36c-4258-ae86-6bb72db7c0e4","Type":"ContainerStarted","Data":"afe722e86f464f1dcb7c12c006fc8b8dfbb3ffc573d30a9563ed6c9c0aabc093"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.755975 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerStarted","Data":"76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.756037 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerStarted","Data":"1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.758035 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" event={"ID":"55a7b0a0-24f0-4b6b-82bf-f131f831af3a","Type":"ContainerStarted","Data":"2a11524d9934422d573d6f7d5b4480a7515d5dc4d6144ed248c1cab3eaf9ec16"} Feb 17 16:11:07 crc kubenswrapper[4829]: I0217 16:11:07.777025 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-864565556d-824bj" podStartSLOduration=2.777009385 podStartE2EDuration="2.777009385s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:11:07.773627815 +0000 UTC m=+980.190645793" watchObservedRunningTime="2026-02-17 16:11:07.777009385 +0000 UTC m=+980.194027363" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.774408 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" event={"ID":"55a7b0a0-24f0-4b6b-82bf-f131f831af3a","Type":"ContainerStarted","Data":"01de5783cf50eb53fa7c3d3fd4fb4448a4082b23f3514cafab3f491b4bced204"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.774865 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.777232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-47lp4" event={"ID":"4e62a7c0-ac99-4dd8-a587-58c98adb3a25","Type":"ContainerStarted","Data":"7396e859466a78f066ed44e70b88be1c92bbfc1fb80fadb3b24d6388370c6b94"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.777321 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.778801 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" 
event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"6edf72e5ac8b699491eb0f520f374a3d61fcaa48fa6b585a0a16b80c72be6ba9"} Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.792987 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" podStartSLOduration=2.883003023 podStartE2EDuration="4.792972773s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.918016197 +0000 UTC m=+979.335034175" lastFinishedPulling="2026-02-17 16:11:08.827985907 +0000 UTC m=+981.245003925" observedRunningTime="2026-02-17 16:11:09.790048535 +0000 UTC m=+982.207066503" watchObservedRunningTime="2026-02-17 16:11:09.792972773 +0000 UTC m=+982.209990751" Feb 17 16:11:09 crc kubenswrapper[4829]: I0217 16:11:09.819244 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-47lp4" podStartSLOduration=1.937279932 podStartE2EDuration="4.819219533s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:05.94435764 +0000 UTC m=+978.361375618" lastFinishedPulling="2026-02-17 16:11:08.826297231 +0000 UTC m=+981.243315219" observedRunningTime="2026-02-17 16:11:09.814313522 +0000 UTC m=+982.231331500" watchObservedRunningTime="2026-02-17 16:11:09.819219533 +0000 UTC m=+982.236237511" Feb 17 16:11:10 crc kubenswrapper[4829]: I0217 16:11:10.789247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" event={"ID":"df7e3d75-f36c-4258-ae86-6bb72db7c0e4","Type":"ContainerStarted","Data":"30d9f08bd040a55f8cb65c9f090bd8a0eafe1566a713ce987b8e0ef5cfd18678"} Feb 17 16:11:10 crc kubenswrapper[4829]: I0217 16:11:10.809488 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-mchvp" podStartSLOduration=2.750207731 
podStartE2EDuration="5.809464984s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.843876599 +0000 UTC m=+979.260894577" lastFinishedPulling="2026-02-17 16:11:09.903133832 +0000 UTC m=+982.320151830" observedRunningTime="2026-02-17 16:11:10.801631444 +0000 UTC m=+983.218649452" watchObservedRunningTime="2026-02-17 16:11:10.809464984 +0000 UTC m=+983.226482962" Feb 17 16:11:11 crc kubenswrapper[4829]: I0217 16:11:11.800868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" event={"ID":"20b39811-2839-4b55-a69e-a293416edb22","Type":"ContainerStarted","Data":"577e20ad2933f746b58851298d6006c06b5241e2355d47469f8202e1eb05b0a8"} Feb 17 16:11:11 crc kubenswrapper[4829]: I0217 16:11:11.828056 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-85cbd" podStartSLOduration=1.645992547 podStartE2EDuration="6.828017129s" podCreationTimestamp="2026-02-17 16:11:05 +0000 UTC" firstStartedPulling="2026-02-17 16:11:06.366444156 +0000 UTC m=+978.783462154" lastFinishedPulling="2026-02-17 16:11:11.548468718 +0000 UTC m=+983.965486736" observedRunningTime="2026-02-17 16:11:11.823026786 +0000 UTC m=+984.240044784" watchObservedRunningTime="2026-02-17 16:11:11.828017129 +0000 UTC m=+984.245035117" Feb 17 16:11:15 crc kubenswrapper[4829]: I0217 16:11:15.898211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-47lp4" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.309374 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.309434 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.315770 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.847949 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-864565556d-824bj" Feb 17 16:11:16 crc kubenswrapper[4829]: I0217 16:11:16.914069 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:11:26 crc kubenswrapper[4829]: I0217 16:11:26.438102 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-v2bww" Feb 17 16:11:41 crc kubenswrapper[4829]: I0217 16:11:41.977635 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-797db4bf78-znlsn" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console" containerID="cri-o://bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" gracePeriod=15 Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.527122 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-797db4bf78-znlsn_6fa156f6-505b-4ad3-b8e7-b66291338bc9/console/0.log" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.527447 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.617405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.617951 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618120 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618161 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618246 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618347 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.618977 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config" (OuterVolumeSpecName: "console-config") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619168 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") pod \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\" (UID: \"6fa156f6-505b-4ad3-b8e7-b66291338bc9\") " Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619747 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca" (OuterVolumeSpecName: "service-ca") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.619805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620090 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620106 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620119 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.620128 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6fa156f6-505b-4ad3-b8e7-b66291338bc9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.624443 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.633236 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr" (OuterVolumeSpecName: "kube-api-access-9wmkr") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "kube-api-access-9wmkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.633509 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6fa156f6-505b-4ad3-b8e7-b66291338bc9" (UID: "6fa156f6-505b-4ad3-b8e7-b66291338bc9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722154 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722384 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmkr\" (UniqueName: \"kubernetes.io/projected/6fa156f6-505b-4ad3-b8e7-b66291338bc9-kube-api-access-9wmkr\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:42 crc kubenswrapper[4829]: I0217 16:11:42.722396 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa156f6-505b-4ad3-b8e7-b66291338bc9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091094 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-797db4bf78-znlsn_6fa156f6-505b-4ad3-b8e7-b66291338bc9/console/0.log" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091179 4829 generic.go:334] "Generic (PLEG): container finished" podID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" exitCode=2 Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091224 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerDied","Data":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"} Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091264 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-797db4bf78-znlsn" event={"ID":"6fa156f6-505b-4ad3-b8e7-b66291338bc9","Type":"ContainerDied","Data":"bfae83dcdb0a183b25666f792e4baf03784ae0581990e298c8186a70a2bee65f"} Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091292 4829 scope.go:117] "RemoveContainer" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.091497 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-797db4bf78-znlsn" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.130783 4829 scope.go:117] "RemoveContainer" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" Feb 17 16:11:43 crc kubenswrapper[4829]: E0217 16:11:43.132025 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": container with ID starting with bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e not found: ID does not exist" containerID="bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.132078 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e"} err="failed to get container status \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": rpc error: code = NotFound desc = could not find container \"bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e\": container with ID starting with bf2acd7cbbb8715271add26e2974beb4d31b065808198e205d79e2e86a9ec60e not found: ID does not exist" Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.137118 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:11:43 crc kubenswrapper[4829]: I0217 16:11:43.141781 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-797db4bf78-znlsn"] Feb 17 16:11:44 crc kubenswrapper[4829]: I0217 16:11:44.291962 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" path="/var/lib/kubelet/pods/6fa156f6-505b-4ad3-b8e7-b66291338bc9/volumes" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.119525 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"] Feb 17 16:11:48 crc kubenswrapper[4829]: E0217 16:11:48.120491 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.120513 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.120885 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa156f6-505b-4ad3-b8e7-b66291338bc9" containerName="console" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.123231 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.133977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.140311 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"] Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216138 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.216232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318397 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318466 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.318503 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.319256 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.319490 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.351824 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.448290 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.456660 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:48 crc kubenswrapper[4829]: I0217 16:11:48.918990 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px"] Feb 17 16:11:48 crc kubenswrapper[4829]: W0217 16:11:48.928231 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ecbb28_5618_4f33_9125_c0372c407b89.slice/crio-72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7 WatchSource:0}: Error finding container 72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7: Status 404 returned error can't find the container with id 72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7 Feb 17 16:11:49 crc kubenswrapper[4829]: I0217 16:11:49.148879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerStarted","Data":"e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f"} Feb 17 16:11:49 crc kubenswrapper[4829]: I0217 16:11:49.149158 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerStarted","Data":"72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7"} Feb 17 16:11:50 crc kubenswrapper[4829]: I0217 16:11:50.170865 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f" exitCode=0 Feb 17 16:11:50 crc kubenswrapper[4829]: I0217 16:11:50.171196 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"e5a090887047ff949511ebf53bfef356ac292bb111d8019a8508d2c548f8590f"} Feb 17 16:11:53 crc kubenswrapper[4829]: I0217 16:11:53.195954 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="595ccae63b4f2be9a50ce2e039446a2c09503ab4c57fe55384f3b7577856f2f5" exitCode=0 Feb 17 16:11:53 crc kubenswrapper[4829]: I0217 16:11:53.196065 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"595ccae63b4f2be9a50ce2e039446a2c09503ab4c57fe55384f3b7577856f2f5"} Feb 17 16:11:54 crc kubenswrapper[4829]: I0217 16:11:54.205016 4829 generic.go:334] "Generic (PLEG): container finished" podID="63ecbb28-5618-4f33-9125-c0372c407b89" containerID="8f9ea8944c3ea357e608b23d3e385077f9d06f003cc95e5fb8fddac21c046991" exitCode=0 Feb 17 16:11:54 crc kubenswrapper[4829]: I0217 16:11:54.205061 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"8f9ea8944c3ea357e608b23d3e385077f9d06f003cc95e5fb8fddac21c046991"} Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.567159 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647838 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647914 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.647949 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") pod \"63ecbb28-5618-4f33-9125-c0372c407b89\" (UID: \"63ecbb28-5618-4f33-9125-c0372c407b89\") " Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.648860 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle" (OuterVolumeSpecName: "bundle") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.654667 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h" (OuterVolumeSpecName: "kube-api-access-68t8h") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "kube-api-access-68t8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.665652 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util" (OuterVolumeSpecName: "util") pod "63ecbb28-5618-4f33-9125-c0372c407b89" (UID: "63ecbb28-5618-4f33-9125-c0372c407b89"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750911 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68t8h\" (UniqueName: \"kubernetes.io/projected/63ecbb28-5618-4f33-9125-c0372c407b89-kube-api-access-68t8h\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750966 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:55 crc kubenswrapper[4829]: I0217 16:11:55.750987 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63ecbb28-5618-4f33-9125-c0372c407b89-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" event={"ID":"63ecbb28-5618-4f33-9125-c0372c407b89","Type":"ContainerDied","Data":"72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7"} Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227859 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px" Feb 17 16:11:56 crc kubenswrapper[4829]: I0217 16:11:56.227871 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72279b1405b20f44528eaa5485fa262456d6ca56e10ed2312a3b978b2deea5e7" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.837897 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"] Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838687 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="util" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838700 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="util" Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838712 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="pull" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838718 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="pull" Feb 17 16:12:06 crc kubenswrapper[4829]: E0217 16:12:06.838740 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838746 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.838862 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ecbb28-5618-4f33-9125-c0372c407b89" containerName="extract" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.839376 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.841186 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.850880 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852504 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852591 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xzx6f" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.852655 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.857452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"] Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971300 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" 
(UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:06 crc kubenswrapper[4829]: I0217 16:12:06.971455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.072895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.072986 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.073048 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.080519 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"] Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.080655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-webhook-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.081782 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.085176 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5cf20c6-9fae-4c85-9c16-53e313c04cda-apiservice-cert\") pod \"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087034 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087240 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mjkpp" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.087428 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.103279 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5p2\" (UniqueName: \"kubernetes.io/projected/c5cf20c6-9fae-4c85-9c16-53e313c04cda-kube-api-access-bd5p2\") pod 
\"metallb-operator-controller-manager-848c6d5b-p864p\" (UID: \"c5cf20c6-9fae-4c85-9c16-53e313c04cda\") " pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.114356 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"] Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.158613 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.173725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.174068 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.174122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 
16:12:07.275007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.275062 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.275127 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.281677 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-webhook-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.292359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wsr\" (UniqueName: \"kubernetes.io/projected/90b368e2-73a9-4594-8428-e17a7bb1e499-kube-api-access-j8wsr\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" 
(UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.301417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90b368e2-73a9-4594-8428-e17a7bb1e499-apiservice-cert\") pod \"metallb-operator-webhook-server-6bd8598c46-74wvs\" (UID: \"90b368e2-73a9-4594-8428-e17a7bb1e499\") " pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.473629 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.649123 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848c6d5b-p864p"] Feb 17 16:12:07 crc kubenswrapper[4829]: I0217 16:12:07.982710 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs"] Feb 17 16:12:07 crc kubenswrapper[4829]: W0217 16:12:07.987960 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90b368e2_73a9_4594_8428_e17a7bb1e499.slice/crio-dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e WatchSource:0}: Error finding container dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e: Status 404 returned error can't find the container with id dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e Feb 17 16:12:08 crc kubenswrapper[4829]: I0217 16:12:08.337007 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" 
event={"ID":"c5cf20c6-9fae-4c85-9c16-53e313c04cda","Type":"ContainerStarted","Data":"2b0410ba236172b8a0e4828a66fd1d5b9725a457e8a70eb39b1fc87534f20fa6"} Feb 17 16:12:08 crc kubenswrapper[4829]: I0217 16:12:08.339378 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" event={"ID":"90b368e2-73a9-4594-8428-e17a7bb1e499","Type":"ContainerStarted","Data":"dce395b3113f65ffabeb97442430149ba5646eabefee964ab46c1169b716168e"} Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.362237 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" event={"ID":"c5cf20c6-9fae-4c85-9c16-53e313c04cda","Type":"ContainerStarted","Data":"586ba4aa8780242b2c8d89354a083d24911e53f5e530276a1cdc345f3f39f253"} Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.363779 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:11 crc kubenswrapper[4829]: I0217 16:12:11.389281 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" podStartSLOduration=2.008957725 podStartE2EDuration="5.389258577s" podCreationTimestamp="2026-02-17 16:12:06 +0000 UTC" firstStartedPulling="2026-02-17 16:12:07.671063735 +0000 UTC m=+1040.088081713" lastFinishedPulling="2026-02-17 16:12:11.051364587 +0000 UTC m=+1043.468382565" observedRunningTime="2026-02-17 16:12:11.379216858 +0000 UTC m=+1043.796234836" watchObservedRunningTime="2026-02-17 16:12:11.389258577 +0000 UTC m=+1043.806276555" Feb 17 16:12:13 crc kubenswrapper[4829]: I0217 16:12:13.393860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" 
event={"ID":"90b368e2-73a9-4594-8428-e17a7bb1e499","Type":"ContainerStarted","Data":"30941ca2c2a4ab1dbc253a918d2e520afd56f2324ae307cbfda9f40ad1132d02"} Feb 17 16:12:13 crc kubenswrapper[4829]: I0217 16:12:13.394212 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:27 crc kubenswrapper[4829]: I0217 16:12:27.501615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" Feb 17 16:12:27 crc kubenswrapper[4829]: I0217 16:12:27.537491 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" podStartSLOduration=15.895978829 podStartE2EDuration="20.537472363s" podCreationTimestamp="2026-02-17 16:12:07 +0000 UTC" firstStartedPulling="2026-02-17 16:12:07.993653015 +0000 UTC m=+1040.410671003" lastFinishedPulling="2026-02-17 16:12:12.635146549 +0000 UTC m=+1045.052164537" observedRunningTime="2026-02-17 16:12:13.419180286 +0000 UTC m=+1045.836198264" watchObservedRunningTime="2026-02-17 16:12:27.537472363 +0000 UTC m=+1059.954490341" Feb 17 16:12:47 crc kubenswrapper[4829]: I0217 16:12:47.162476 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-848c6d5b-p864p" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.192649 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7qwft"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.195617 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.199627 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.199646 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w5psx" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.205274 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.210143 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.211248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.217791 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.218345 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.320439 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8gr6k"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.322222 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zzhzt" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327249 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327212 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.327429 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.344481 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.346175 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.348758 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.364713 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"] Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366870 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366901 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.366946 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc 
kubenswrapper[4829]: I0217 16:12:48.366964 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367011 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367077 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.367121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468322 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468388 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468423 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468467 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468489 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468511 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468548 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468628 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwv92\" (UniqueName: \"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468674 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468730 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468785 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.468822 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-conf\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.470149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-sockets\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc 
kubenswrapper[4829]: I0217 16:12:48.470450 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.470637 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-reloader\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.471084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.471176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.475235 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.475235 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.476808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8ddfc374-12f8-443a-bcc1-526613e031bf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.481002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-frr-startup\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.481069 4829 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.481112 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs podName:901c7cfc-f3f1-470c-bd1f-47ab57bb1b53 nodeName:}" failed. No retries permitted until 2026-02-17 16:12:48.981100151 +0000 UTC m=+1081.398118129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs") pod "frr-k8s-7qwft" (UID: "901c7cfc-f3f1-470c-bd1f-47ab57bb1b53") : secret "frr-k8s-certs-secret" not found Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.489310 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfzw\" (UniqueName: \"kubernetes.io/projected/8ddfc374-12f8-443a-bcc1-526613e031bf-kube-api-access-mtfzw\") pod \"frr-k8s-webhook-server-78b44bf5bb-l8gzk\" (UID: \"8ddfc374-12f8-443a-bcc1-526613e031bf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.504121 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45fdg\" (UniqueName: \"kubernetes.io/projected/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-kube-api-access-45fdg\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.572660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.572729 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573229 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv92\" (UniqueName: 
\"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573268 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573288 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.573369 4829 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:12:48 crc kubenswrapper[4829]: E0217 16:12:48.573422 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist podName:a25680cc-e984-4ad7-95e2-3fe561a5fa8c nodeName:}" failed. No retries permitted until 2026-02-17 16:12:49.073407545 +0000 UTC m=+1081.490425523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist") pod "speaker-8gr6k" (UID: "a25680cc-e984-4ad7-95e2-3fe561a5fa8c") : secret "metallb-memberlist" not found Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573375 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.573463 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.574468 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metallb-excludel2\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.580843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.581102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-metrics-certs\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.581345 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-metrics-certs\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.583279 4829 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-w5psx" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.588178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1da62b69-54b6-4041-885f-acda828405c9-cert\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.588356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2rr\" (UniqueName: \"kubernetes.io/projected/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-kube-api-access-ll2rr\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.591491 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.591820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv92\" (UniqueName: \"kubernetes.io/projected/1da62b69-54b6-4041-885f-acda828405c9-kube-api-access-wwv92\") pod \"controller-69bbfbf88f-g4znl\" (UID: \"1da62b69-54b6-4041-885f-acda828405c9\") " pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.660866 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.993861 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:48 crc kubenswrapper[4829]: I0217 16:12:48.999014 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/901c7cfc-f3f1-470c-bd1f-47ab57bb1b53-metrics-certs\") pod \"frr-k8s-7qwft\" (UID: \"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53\") " pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:49 crc kubenswrapper[4829]: W0217 16:12:49.055160 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ddfc374_12f8_443a_bcc1_526613e031bf.slice/crio-b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f WatchSource:0}: Error finding container b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f: Status 404 returned error can't find the container with id b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.057640 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.058216 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk"] Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.095945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " 
pod="metallb-system/speaker-8gr6k" Feb 17 16:12:49 crc kubenswrapper[4829]: E0217 16:12:49.096094 4829 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:12:49 crc kubenswrapper[4829]: E0217 16:12:49.096146 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist podName:a25680cc-e984-4ad7-95e2-3fe561a5fa8c nodeName:}" failed. No retries permitted until 2026-02-17 16:12:50.096132737 +0000 UTC m=+1082.513150715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist") pod "speaker-8gr6k" (UID: "a25680cc-e984-4ad7-95e2-3fe561a5fa8c") : secret "metallb-memberlist" not found Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.136511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-g4znl"] Feb 17 16:12:49 crc kubenswrapper[4829]: W0217 16:12:49.141444 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1da62b69_54b6_4041_885f_acda828405c9.slice/crio-317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8 WatchSource:0}: Error finding container 317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8: Status 404 returned error can't find the container with id 317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8 Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.156197 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.779252 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" event={"ID":"8ddfc374-12f8-443a-bcc1-526613e031bf","Type":"ContainerStarted","Data":"b10a0bd3ad428ec8111d4c274fae38178dce05bb138be0a39e03a2c66fa8655f"} Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782625 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"1f2b4d973a38190c89afc29f0404e56be82795fa6683effe3aa96ddfcaa047d7"} Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"ed0f7057f2dd25efde919280825925dc683bd3674509d9c4a96f4c60a7d6bcf5"} Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.782686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-g4znl" event={"ID":"1da62b69-54b6-4041-885f-acda828405c9","Type":"ContainerStarted","Data":"317c38c444ff391b31d9375209d848974827ed8decc283eeb3be44359688e8e8"} Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.783030 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.783916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"8d5e40f95b8b32b0e4659116a384009375ce7f0a242497af27a6ecf9f27201a2"} Feb 17 16:12:49 crc kubenswrapper[4829]: I0217 16:12:49.810694 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-69bbfbf88f-g4znl" podStartSLOduration=1.8106727089999999 podStartE2EDuration="1.810672709s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:12:49.80249101 +0000 UTC m=+1082.219508998" watchObservedRunningTime="2026-02-17 16:12:49.810672709 +0000 UTC m=+1082.227690697" Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.140532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.149079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a25680cc-e984-4ad7-95e2-3fe561a5fa8c-memberlist\") pod \"speaker-8gr6k\" (UID: \"a25680cc-e984-4ad7-95e2-3fe561a5fa8c\") " pod="metallb-system/speaker-8gr6k" Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.445148 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8gr6k" Feb 17 16:12:50 crc kubenswrapper[4829]: W0217 16:12:50.492755 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25680cc_e984_4ad7_95e2_3fe561a5fa8c.slice/crio-706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6 WatchSource:0}: Error finding container 706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6: Status 404 returned error can't find the container with id 706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6 Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.802584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"e8880e7320f84ab2c9dbdc4a1ce02de55071649f1b72fe7eb03867b5e90bff76"} Feb 17 16:12:50 crc kubenswrapper[4829]: I0217 16:12:50.803575 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"706d4c8ebc122c46ff744d8fff0a748c185863eeaa00d58c1d2e4f1006c2e6c6"} Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.818303 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gr6k" event={"ID":"a25680cc-e984-4ad7-95e2-3fe561a5fa8c","Type":"ContainerStarted","Data":"8437d6e9c831510743064901310618af296374f0903064abe7e5a40242e2b96e"} Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.818425 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gr6k" Feb 17 16:12:51 crc kubenswrapper[4829]: I0217 16:12:51.841048 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8gr6k" podStartSLOduration=3.841032461 podStartE2EDuration="3.841032461s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:12:51.838300608 +0000 UTC m=+1084.255318586" watchObservedRunningTime="2026-02-17 16:12:51.841032461 +0000 UTC m=+1084.258050439" Feb 17 16:12:52 crc kubenswrapper[4829]: I0217 16:12:52.424818 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:52 crc kubenswrapper[4829]: I0217 16:12:52.424876 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.871967 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" event={"ID":"8ddfc374-12f8-443a-bcc1-526613e031bf","Type":"ContainerStarted","Data":"00836c2fbad67147f5669bc2e2110be71ba1eb87ab8b6c03f17d00b665ad892e"} Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874122 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874630 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="0e3ca35e5382f1b19ce9e6905d010989593420d7ecacee9dba37295db690f677" exitCode=0 Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.874706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" 
event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"0e3ca35e5382f1b19ce9e6905d010989593420d7ecacee9dba37295db690f677"} Feb 17 16:12:57 crc kubenswrapper[4829]: I0217 16:12:57.906529 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" podStartSLOduration=2.104617665 podStartE2EDuration="9.906496352s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.057294231 +0000 UTC m=+1081.474312219" lastFinishedPulling="2026-02-17 16:12:56.859172928 +0000 UTC m=+1089.276190906" observedRunningTime="2026-02-17 16:12:57.899035612 +0000 UTC m=+1090.316053630" watchObservedRunningTime="2026-02-17 16:12:57.906496352 +0000 UTC m=+1090.323514370" Feb 17 16:12:58 crc kubenswrapper[4829]: I0217 16:12:58.897331 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="682ae7384a37d88e27884ddce5f3b338f9aa4fc29ac807fdbbb7139c0cb56e6f" exitCode=0 Feb 17 16:12:58 crc kubenswrapper[4829]: I0217 16:12:58.898657 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"682ae7384a37d88e27884ddce5f3b338f9aa4fc29ac807fdbbb7139c0cb56e6f"} Feb 17 16:12:59 crc kubenswrapper[4829]: I0217 16:12:59.905760 4829 generic.go:334] "Generic (PLEG): container finished" podID="901c7cfc-f3f1-470c-bd1f-47ab57bb1b53" containerID="d6ebf9b0c6b3aa3c2de9a8e95d635483695be50bf07e29cf4a1d04a743aa6113" exitCode=0 Feb 17 16:12:59 crc kubenswrapper[4829]: I0217 16:12:59.905847 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerDied","Data":"d6ebf9b0c6b3aa3c2de9a8e95d635483695be50bf07e29cf4a1d04a743aa6113"} Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.449280 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gr6k" Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.917809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"463ab0bbb16bb92261c15e48f9ae939fb135ebcb5f3df50b11d1cbd134fcf318"} Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918099 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"05478e29648db01f7b6c736aa5a45a4903a2ab55899a73fa68c92fd5bb871b3a"} Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"7620e4186e15ecc26087bf64d5d082c690cd3a3c7702b0f1bc3c289869be07d5"} Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"9d2be24f1bf8eddc184ee056770427a8ecbbf4a7d83a3a1059d16c84f6231fb3"} Feb 17 16:13:00 crc kubenswrapper[4829]: I0217 16:13:00.918138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"420e8a9a9375d5c975e01310feff342eccd9a4b0f903e3093ef5d7b3aab9963e"} Feb 17 16:13:01 crc kubenswrapper[4829]: I0217 16:13:01.939631 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7qwft" event={"ID":"901c7cfc-f3f1-470c-bd1f-47ab57bb1b53","Type":"ContainerStarted","Data":"fa476528f7c96eb7e1517034a9892a14173128c6cc9bdf2a801c712232fddea2"} Feb 17 16:13:01 crc kubenswrapper[4829]: I0217 16:13:01.939978 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.157161 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.215671 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:13:04 crc kubenswrapper[4829]: I0217 16:13:04.250586 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7qwft" podStartSLOduration=8.802922727 podStartE2EDuration="16.250550949s" podCreationTimestamp="2026-02-17 16:12:48 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.388140641 +0000 UTC m=+1081.805158639" lastFinishedPulling="2026-02-17 16:12:56.835768883 +0000 UTC m=+1089.252786861" observedRunningTime="2026-02-17 16:13:01.964874832 +0000 UTC m=+1094.381892820" watchObservedRunningTime="2026-02-17 16:13:04.250550949 +0000 UTC m=+1096.667568927" Feb 17 16:13:08 crc kubenswrapper[4829]: I0217 16:13:08.625693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l8gzk" Feb 17 16:13:08 crc kubenswrapper[4829]: I0217 16:13:08.664940 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-g4znl" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.082219 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"] Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.084501 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.087884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mrxbp" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.089315 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.090025 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.097041 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"] Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.208560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: \"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.310717 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: \"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.327375 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlrw\" (UniqueName: \"kubernetes.io/projected/24ddb2b4-4194-4df5-8820-9ea9c405abc7-kube-api-access-8hlrw\") pod \"openstack-operator-index-6p47w\" (UID: 
\"24ddb2b4-4194-4df5-8820-9ea9c405abc7\") " pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.423502 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:09 crc kubenswrapper[4829]: I0217 16:13:09.874304 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6p47w"] Feb 17 16:13:10 crc kubenswrapper[4829]: I0217 16:13:10.017283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6p47w" event={"ID":"24ddb2b4-4194-4df5-8820-9ea9c405abc7","Type":"ContainerStarted","Data":"e0a0ac14a9ec77ff26e9edd15a2139a3e52e6d3468e83e1b4ee855db09b3b565"} Feb 17 16:13:16 crc kubenswrapper[4829]: I0217 16:13:16.089230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6p47w" event={"ID":"24ddb2b4-4194-4df5-8820-9ea9c405abc7","Type":"ContainerStarted","Data":"455e387075a05389a7b37c16dcbfa2b06e409760fcb396e9c51a87427e0fbc02"} Feb 17 16:13:16 crc kubenswrapper[4829]: I0217 16:13:16.117076 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6p47w" podStartSLOduration=1.680429666 podStartE2EDuration="7.117047498s" podCreationTimestamp="2026-02-17 16:13:09 +0000 UTC" firstStartedPulling="2026-02-17 16:13:09.890467287 +0000 UTC m=+1102.307485305" lastFinishedPulling="2026-02-17 16:13:15.327085119 +0000 UTC m=+1107.744103137" observedRunningTime="2026-02-17 16:13:16.107377305 +0000 UTC m=+1108.524395323" watchObservedRunningTime="2026-02-17 16:13:16.117047498 +0000 UTC m=+1108.534065506" Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.161643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7qwft" Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.424350 
4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.424728 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:19 crc kubenswrapper[4829]: I0217 16:13:19.468881 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:20 crc kubenswrapper[4829]: I0217 16:13:20.177260 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6p47w" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.503126 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"] Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.505677 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.507734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-27r92" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.513193 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"] Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.639995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.640117 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.640312 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 
16:13:21.742281 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742369 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742410 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.742928 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.743209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.775567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:21 crc kubenswrapper[4829]: I0217 16:13:21.824460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.361427 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj"] Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.425002 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:13:22 crc kubenswrapper[4829]: I0217 16:13:22.425068 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 
16:13:23.163439 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="9f874b6512a76eca1a3bf4f47a6e9cb2321418a3f501b2e13072fb2895b465e7" exitCode=0 Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 16:13:23.163474 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"9f874b6512a76eca1a3bf4f47a6e9cb2321418a3f501b2e13072fb2895b465e7"} Feb 17 16:13:23 crc kubenswrapper[4829]: I0217 16:13:23.163498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerStarted","Data":"7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2"} Feb 17 16:13:24 crc kubenswrapper[4829]: I0217 16:13:24.177629 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="4a7c39e048d790718740f3991e6cd1b7b2ff97312edb34c4e151b35c42537a78" exitCode=0 Feb 17 16:13:24 crc kubenswrapper[4829]: I0217 16:13:24.177709 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"4a7c39e048d790718740f3991e6cd1b7b2ff97312edb34c4e151b35c42537a78"} Feb 17 16:13:25 crc kubenswrapper[4829]: I0217 16:13:25.191231 4829 generic.go:334] "Generic (PLEG): container finished" podID="585600e7-9faf-493f-ac02-1e8e489f6955" containerID="01abad8c7a5bbcf5ec651f969643efcad42c80a6f82f3f6928f791cc2511528c" exitCode=0 Feb 17 16:13:25 crc kubenswrapper[4829]: I0217 16:13:25.191291 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"01abad8c7a5bbcf5ec651f969643efcad42c80a6f82f3f6928f791cc2511528c"} Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.544864 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.657708 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") pod \"585600e7-9faf-493f-ac02-1e8e489f6955\" (UID: \"585600e7-9faf-493f-ac02-1e8e489f6955\") " Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.658442 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle" (OuterVolumeSpecName: "bundle") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.663384 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc" (OuterVolumeSpecName: "kube-api-access-pvmjc") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "kube-api-access-pvmjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.678461 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util" (OuterVolumeSpecName: "util") pod "585600e7-9faf-493f-ac02-1e8e489f6955" (UID: "585600e7-9faf-493f-ac02-1e8e489f6955"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759182 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmjc\" (UniqueName: \"kubernetes.io/projected/585600e7-9faf-493f-ac02-1e8e489f6955-kube-api-access-pvmjc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759224 4829 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:26 crc kubenswrapper[4829]: I0217 16:13:26.759236 4829 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585600e7-9faf-493f-ac02-1e8e489f6955-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" 
event={"ID":"585600e7-9faf-493f-ac02-1e8e489f6955","Type":"ContainerDied","Data":"7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2"} Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208428 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cdac4bae657ecde863d00286854d71ef325ef9fdbe018710481ed2356a481c2" Feb 17 16:13:27 crc kubenswrapper[4829]: I0217 16:13:27.208448 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.270996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"] Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271542 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="util" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271555 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="util" Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271584 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271591 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract" Feb 17 16:13:31 crc kubenswrapper[4829]: E0217 16:13:31.271604 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="pull" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271611 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="pull" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.271752 4829 
memory_manager.go:354] "RemoveStaleState removing state" podUID="585600e7-9faf-493f-ac02-1e8e489f6955" containerName="extract" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.272240 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.286173 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-b4s9w" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.314116 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"] Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.440436 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod \"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.541892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod \"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.558424 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcsl\" (UniqueName: \"kubernetes.io/projected/f5adeb4d-89fb-480c-a429-7cf978198db2-kube-api-access-9bcsl\") pod 
\"openstack-operator-controller-init-64549bfd8b-ksr2v\" (UID: \"f5adeb4d-89fb-480c-a429-7cf978198db2\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:31 crc kubenswrapper[4829]: I0217 16:13:31.590026 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:32 crc kubenswrapper[4829]: I0217 16:13:32.069187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v"] Feb 17 16:13:32 crc kubenswrapper[4829]: I0217 16:13:32.252031 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" event={"ID":"f5adeb4d-89fb-480c-a429-7cf978198db2","Type":"ContainerStarted","Data":"e8d67f405e6f576148e50ad2ca806792dc299f6c5699fb2d26586da453a1e641"} Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.310085 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" event={"ID":"f5adeb4d-89fb-480c-a429-7cf978198db2","Type":"ContainerStarted","Data":"563df93fbb6d3252ec49b4cdb26cd800d557a0ce2f612159b6fe139e7241c2ff"} Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.310883 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:37 crc kubenswrapper[4829]: I0217 16:13:37.373130 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" podStartSLOduration=1.933274199 podStartE2EDuration="6.373101096s" podCreationTimestamp="2026-02-17 16:13:31 +0000 UTC" firstStartedPulling="2026-02-17 16:13:32.076806831 +0000 UTC m=+1124.493824799" lastFinishedPulling="2026-02-17 16:13:36.516633718 +0000 UTC m=+1128.933651696" 
observedRunningTime="2026-02-17 16:13:37.369724774 +0000 UTC m=+1129.786742812" watchObservedRunningTime="2026-02-17 16:13:37.373101096 +0000 UTC m=+1129.790119124" Feb 17 16:13:41 crc kubenswrapper[4829]: I0217 16:13:41.593891 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-ksr2v" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.424480 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.425163 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.425217 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.426065 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:13:52 crc kubenswrapper[4829]: I0217 16:13:52.426156 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8" gracePeriod=600 Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.461689 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8" exitCode=0 Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.461722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8"} Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.462291 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"} Feb 17 16:13:53 crc kubenswrapper[4829]: I0217 16:13:53.462321 4829 scope.go:117] "RemoveContainer" containerID="87ad109950860aced869ef158d4a4198d2273e2872547d74b414b2640c294e6b" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.788832 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.791040 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.800144 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.808105 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.812079 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r2fsv" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.822973 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bfc57" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.839568 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.875843 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.884690 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.886008 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.890603 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nbqhf" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.891353 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.892299 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.895011 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-479nq" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.921897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.921931 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.929466 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.939989 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.961119 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"] Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.962248 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.964332 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k8bsk" Feb 17 16:14:01 crc kubenswrapper[4829]: I0217 16:14:01.984821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.020740 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.021936 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022893 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod \"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022947 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.022973 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 
16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.025361 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xgrh4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.033114 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.036105 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.037316 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.040230 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h26n4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.043346 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.044379 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.045444 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.046882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sld5q" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.060978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmhvk\" (UniqueName: \"kubernetes.io/projected/6084260e-35c2-43b5-9606-98e1e0463e98-kube-api-access-nmhvk\") pod \"barbican-operator-controller-manager-868647ff47-dlskg\" (UID: \"6084260e-35c2-43b5-9606-98e1e0463e98\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.076996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.078093 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.079952 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkcx\" (UniqueName: \"kubernetes.io/projected/f3add145-231f-4d7b-b9dd-115026b2a05e-kube-api-access-lxkcx\") pod \"cinder-operator-controller-manager-5d946d989d-w97sk\" (UID: \"f3add145-231f-4d7b-b9dd-115026b2a05e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.090035 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jv49f" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.090155 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.107638 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.115178 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.121530 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125029 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.125160 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod 
\"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.127491 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.131294 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.132858 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.133846 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.138454 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8rf98" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.138751 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qmbqj" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.139924 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.144038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.150239 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.151255 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.154687 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fmb\" (UniqueName: \"kubernetes.io/projected/a711806b-ee8c-4fb8-b5da-da5e90ef06c6-kube-api-access-q4fmb\") pod \"designate-operator-controller-manager-6d8bf5c495-shssw\" (UID: \"a711806b-ee8c-4fb8-b5da-da5e90ef06c6\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.159977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zt6g9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.163359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rnx\" (UniqueName: \"kubernetes.io/projected/bb32d7a2-68ff-4511-a04f-fa09657791db-kube-api-access-k5rnx\") pod \"glance-operator-controller-manager-77987464f4-7j8p7\" (UID: \"bb32d7a2-68ff-4511-a04f-fa09657791db\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.164487 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.165419 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.166460 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.167059 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-w6krp" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.170761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.177057 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.184850 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.186653 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.189826 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tws64" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.191397 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.192470 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.201468 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.202541 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.207450 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.217879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.220977 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.221773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-k4c7x" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.221986 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ms8s5" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.224995 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227225 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227261 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227303 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" 
Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227349 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227386 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.227420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.236286 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.240229 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.256165 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6965\" (UniqueName: \"kubernetes.io/projected/84a22a6b-1fb5-4959-9342-0bcc4b033b68-kube-api-access-z6965\") pod \"horizon-operator-controller-manager-5b9b8895d5-hmtfv\" (UID: \"84a22a6b-1fb5-4959-9342-0bcc4b033b68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.265054 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvcp\" (UniqueName: \"kubernetes.io/projected/dd52262f-900a-4801-8c4c-f79787b6b715-kube-api-access-chvcp\") pod \"heat-operator-controller-manager-69f49c598c-9md4j\" (UID: \"dd52262f-900a-4801-8c4c-f79787b6b715\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.299113 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.320643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322200 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322848 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.322925 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.323297 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.327032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tbz7q" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329862 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329896 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329927 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod \"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.329978 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330048 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330069 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330092 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330134 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod 
\"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.330319 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.330384 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:02.830363839 +0000 UTC m=+1155.247381857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.330750 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mlj48" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.349715 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.350809 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.384376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ft2w\" (UniqueName: \"kubernetes.io/projected/8642cada-3458-43cc-90aa-cf66a1cd6426-kube-api-access-5ft2w\") pod \"manila-operator-controller-manager-54f6768c69-fw4gg\" (UID: \"8642cada-3458-43cc-90aa-cf66a1cd6426\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.385247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsdn\" (UniqueName: \"kubernetes.io/projected/62cfcaa0-5c8a-4a67-95b7-83aa695a8640-kube-api-access-ldsdn\") pod \"keystone-operator-controller-manager-b4d948c87-nksk9\" (UID: \"62cfcaa0-5c8a-4a67-95b7-83aa695a8640\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.391013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrfk\" (UniqueName: \"kubernetes.io/projected/60ea5425-d352-4d97-bedf-f01d07c89949-kube-api-access-tzrfk\") pod \"ironic-operator-controller-manager-554564d7fc-t57qn\" (UID: \"60ea5425-d352-4d97-bedf-f01d07c89949\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.389870 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvn5m\" (UniqueName: \"kubernetes.io/projected/0e275e91-4b6e-419e-b076-a6e221f8a8ac-kube-api-access-nvn5m\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.426098 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431409 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod \"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc 
kubenswrapper[4829]: I0217 16:14:02.431434 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431455 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431479 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431501 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.431555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.433079 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.433120 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:02.933106613 +0000 UTC m=+1155.350124591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.466292 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kbt\" (UniqueName: \"kubernetes.io/projected/3aab9223-4e3f-4657-afc2-91d0e0948542-kube-api-access-n9kbt\") pod \"neutron-operator-controller-manager-64ddbf8bb-m4df4\" (UID: \"3aab9223-4e3f-4657-afc2-91d0e0948542\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.467096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jplzk\" (UniqueName: \"kubernetes.io/projected/f083cb81-0369-46de-9562-406736ae7e2f-kube-api-access-jplzk\") pod 
\"nova-operator-controller-manager-567668f5cf-czbvb\" (UID: \"f083cb81-0369-46de-9562-406736ae7e2f\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.473555 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqbz\" (UniqueName: \"kubernetes.io/projected/2237138f-4450-415b-9646-c2ab9f88194a-kube-api-access-kxqbz\") pod \"octavia-operator-controller-manager-69f8888797-ndxcg\" (UID: \"2237138f-4450-415b-9646-c2ab9f88194a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.474271 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46tzn\" (UniqueName: \"kubernetes.io/projected/a1ec01cb-62ae-4855-b830-69f896bfb5a4-kube-api-access-46tzn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.474753 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.475833 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.478809 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6tdx8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.479477 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsx42\" (UniqueName: \"kubernetes.io/projected/72028d3b-7fd0-4b17-b0c2-c92bc7134637-kube-api-access-rsx42\") pod \"ovn-operator-controller-manager-d44cf6b75-mnrxb\" (UID: \"72028d3b-7fd0-4b17-b0c2-c92bc7134637\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.484717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85r7\" (UniqueName: \"kubernetes.io/projected/5b6c89f9-2c4f-4bab-8d8b-cd746acb3426-kube-api-access-g85r7\") pod \"mariadb-operator-controller-manager-6994f66f48-gcxk7\" (UID: \"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.508886 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.534250 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.534343 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.555064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.558890 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv5r7\" (UniqueName: \"kubernetes.io/projected/958dea67-d633-4f5c-a18e-2aca1a55020c-kube-api-access-dv5r7\") pod \"placement-operator-controller-manager-8497b45c89-274tg\" (UID: \"958dea67-d633-4f5c-a18e-2aca1a55020c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.564388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6v8f\" (UniqueName: \"kubernetes.io/projected/4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3-kube-api-access-w6v8f\") pod \"swift-operator-controller-manager-68f46476f-thspt\" (UID: \"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.572806 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.585692 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.586806 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.589935 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ndn4t" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.590377 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.603224 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.611964 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.619542 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.637691 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.647101 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.648154 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.649883 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.668067 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.694055 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rdq6s" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.704154 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.730776 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.774800 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.796234 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.806195 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.810137 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.815745 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfv5\" (UniqueName: \"kubernetes.io/projected/584ed73b-c202-4d41-b884-cd9c279b3c0d-kube-api-access-6qfv5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-8lb5d\" (UID: \"584ed73b-c202-4d41-b884-cd9c279b3c0d\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.818148 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.829639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.846798 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.847058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.847204 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gjtfw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.875470 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.878128 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.878873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.880361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.880706 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.880778 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.880758971 +0000 UTC m=+1156.297776949 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.883953 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bgxbx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.884399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.903713 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74g4\" (UniqueName: \"kubernetes.io/projected/23c03a71-fe86-47ad-ae4b-dd49bc07f2b0-kube-api-access-d74g4\") pod \"test-operator-controller-manager-7866795846-zbs8b\" (UID: \"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.915366 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.919367 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jtqq\" (UniqueName: \"kubernetes.io/projected/5239a5a9-e318-4db3-8394-0427d57d4ae5-kube-api-access-9jtqq\") pod \"watcher-operator-controller-manager-5db88f68c-2xmzw\" (UID: \"5239a5a9-e318-4db3-8394-0427d57d4ae5\") " 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.942029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg"] Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987185 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987258 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: 
\"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.987433 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.988025 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: E0217 16:14:02.988189 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.988127842 +0000 UTC m=+1156.405145850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:02 crc kubenswrapper[4829]: I0217 16:14:02.997358 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.005627 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.073894 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.089361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.090744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.091029 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.091650 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.090967 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093015 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.592991115 +0000 UTC m=+1156.010009083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093057 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.093222 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:03.593207021 +0000 UTC m=+1156.010224999 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.122550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqwx\" (UniqueName: \"kubernetes.io/projected/eaf75815-7964-4bc0-aeae-d3306764d7f4-kube-api-access-frqwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fht2z\" (UID: \"eaf75815-7964-4bc0-aeae-d3306764d7f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.129655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjfs\" (UniqueName: \"kubernetes.io/projected/aa745829-0443-47a5-8c10-701bd4645505-kube-api-access-rbjfs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.210625 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.225914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.231669 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.580393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" event={"ID":"6084260e-35c2-43b5-9606-98e1e0463e98","Type":"ContainerStarted","Data":"d3410af211ad4c60c6f09d81b3076243ab1ee30ec2fa859ff503f169f38c3570"} Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.583724 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" event={"ID":"bb32d7a2-68ff-4511-a04f-fa09657791db","Type":"ContainerStarted","Data":"58f581f92c478154f509f0259f6584d596409df4463a4e75721952fa7b252733"} Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.587262 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" event={"ID":"f3add145-231f-4d7b-b9dd-115026b2a05e","Type":"ContainerStarted","Data":"85171fc1f119509fcc45e3b9bdfc6e138577d5189b233cd292c7574c61ee6e25"} Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.592171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" event={"ID":"a711806b-ee8c-4fb8-b5da-da5e90ef06c6","Type":"ContainerStarted","Data":"6f04c533082e9c2013e18960e0504788f17d3b4cbda263ec4c5601b14b35aa1f"} Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.598529 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.598615 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598840 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598926 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:04.598884877 +0000 UTC m=+1157.015902855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.598989 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.599018 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:04.59900606 +0000 UTC m=+1157.016024038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.617483 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.633510 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.658992 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.904194 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod 
\"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.904370 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: E0217 16:14:03.904439 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:05.904422518 +0000 UTC m=+1158.321440496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.963600 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9"] Feb 17 16:14:03 crc kubenswrapper[4829]: I0217 16:14:03.999365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb"] Feb 17 16:14:04 crc kubenswrapper[4829]: W0217 16:14:04.003142 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8642cada_3458_43cc_90aa_cf66a1cd6426.slice/crio-db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e WatchSource:0}: Error finding container db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e: Status 404 returned error can't find the container with id db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e Feb 17 16:14:04 
crc kubenswrapper[4829]: I0217 16:14:04.005310 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.005515 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.005554 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.005541368 +0000 UTC m=+1158.422559346 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.005844 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.622810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.622853 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623111 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623173 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.62314158 +0000 UTC m=+1159.040159548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623564 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: E0217 16:14:04.623617 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:06.623606112 +0000 UTC m=+1159.040624090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.630520 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.653702 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-274tg"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.672355 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" event={"ID":"60ea5425-d352-4d97-bedf-f01d07c89949","Type":"ContainerStarted","Data":"b25769481bc37e0a5f8c0e1d4fd84083842e28fd72bf6b2df8a783b9358600ea"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.688767 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" event={"ID":"62cfcaa0-5c8a-4a67-95b7-83aa695a8640","Type":"ContainerStarted","Data":"66070a0d3571614bcf2b5f12cf3c4fdc18a5c053996dd16f0fd1acb53fba5a4a"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.720637 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.738025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" event={"ID":"dd52262f-900a-4801-8c4c-f79787b6b715","Type":"ContainerStarted","Data":"f94c4995762de432a8368781f2bde5a94e5519d036b3006064f6fc1a581009c4"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.747433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" event={"ID":"f083cb81-0369-46de-9562-406736ae7e2f","Type":"ContainerStarted","Data":"efbb08583c96fefe42cb25a8046733c7e6fc5c4e228a4deac5dd9ef01ec42d49"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.806899 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-thspt"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.813034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" event={"ID":"84a22a6b-1fb5-4959-9342-0bcc4b033b68","Type":"ContainerStarted","Data":"7ac8aedda18ff4310549ae6c63829785bfb5a36530589d9cd2c9bcfa014b3702"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.845740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" 
event={"ID":"8642cada-3458-43cc-90aa-cf66a1cd6426","Type":"ContainerStarted","Data":"db6164c86fa4ac695dfddade16d54a9a91f9a0efa286a96c0424833b8958223e"} Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.847424 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg"] Feb 17 16:14:04 crc kubenswrapper[4829]: W0217 16:14:04.853837 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6c89f9_2c4f_4bab_8d8b_cd746acb3426.slice/crio-c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6 WatchSource:0}: Error finding container c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6: Status 404 returned error can't find the container with id c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6 Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.882131 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.893657 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.906746 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.915912 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7"] Feb 17 16:14:04 crc kubenswrapper[4829]: I0217 16:14:04.923273 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zbs8b"] Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.909684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" event={"ID":"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0","Type":"ContainerStarted","Data":"33f9c70afe01e505a4f30007cf2c8d966f92fe5a38d82e008e1f730d77b6816c"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.926440 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" event={"ID":"584ed73b-c202-4d41-b884-cd9c279b3c0d","Type":"ContainerStarted","Data":"e07d17a09927d51e3271887e229f5ed2e371c90e8fd6b19d826a5fd16266c960"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.936786 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" event={"ID":"5239a5a9-e318-4db3-8394-0427d57d4ae5","Type":"ContainerStarted","Data":"1889e69af315b274f62d9360c799393e9edfaa0b671c5288315b1fb26ca98b98"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.938841 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" event={"ID":"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3","Type":"ContainerStarted","Data":"e2aed83c83cbf88c1bb273eeee622bc46b09921dc834970cc3c1ff38b10d42e2"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.943559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" event={"ID":"72028d3b-7fd0-4b17-b0c2-c92bc7134637","Type":"ContainerStarted","Data":"4d4751fed392a63d6b63f9ea9d8699bb2bd433fb65613425a69f784c537189cd"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.944354 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" event={"ID":"3aab9223-4e3f-4657-afc2-91d0e0948542","Type":"ContainerStarted","Data":"b1df749bc136c27e822d99a7a1a3f305efce19ae7529fced4d5026d65d634147"} Feb 17 16:14:05 crc 
kubenswrapper[4829]: I0217 16:14:05.945014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" event={"ID":"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426","Type":"ContainerStarted","Data":"c8b360aaa5f565e2b85b259c3a9bcf8c4522d82597aaeec9b93643f264afafc6"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.945698 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" event={"ID":"eaf75815-7964-4bc0-aeae-d3306764d7f4","Type":"ContainerStarted","Data":"71b81c0e0364c4314eac35a90e09cea78ec835b4246f4483eccfb631eb8d9c6d"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.947170 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" event={"ID":"958dea67-d633-4f5c-a18e-2aca1a55020c","Type":"ContainerStarted","Data":"a255871753472c853813d1f36260ab099692af7e6f9a50753b92664e4e6f2c9c"} Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.959895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:05 crc kubenswrapper[4829]: E0217 16:14:05.960083 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:05 crc kubenswrapper[4829]: E0217 16:14:05.960122 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. 
No retries permitted until 2026-02-17 16:14:09.960110258 +0000 UTC m=+1162.377128236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:05 crc kubenswrapper[4829]: I0217 16:14:05.967087 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" event={"ID":"2237138f-4450-415b-9646-c2ab9f88194a","Type":"ContainerStarted","Data":"8bf70cb13d0e908ecc6d38fc39a955e726af63a2a354c739ea093daf51cc0027"} Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.061227 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.061422 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.061499 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.061481557 +0000 UTC m=+1162.478499535 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.675602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:06 crc kubenswrapper[4829]: I0217 16:14:06.675669 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.675906 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.675971 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.675953422 +0000 UTC m=+1163.092971400 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.676409 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:14:06 crc kubenswrapper[4829]: E0217 16:14:06.676451 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:10.676440005 +0000 UTC m=+1163.093457983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:09 crc kubenswrapper[4829]: I0217 16:14:09.968210 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:09 crc kubenswrapper[4829]: E0217 16:14:09.968434 4829 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:09 crc kubenswrapper[4829]: E0217 16:14:09.968735 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert 
podName:0e275e91-4b6e-419e-b076-a6e221f8a8ac nodeName:}" failed. No retries permitted until 2026-02-17 16:14:17.968704344 +0000 UTC m=+1170.385722322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert") pod "infra-operator-controller-manager-79d975b745-vxvp7" (UID: "0e275e91-4b6e-419e-b076-a6e221f8a8ac") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.070185 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.070385 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.070474 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.070451072 +0000 UTC m=+1170.487469050 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.681094 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:10 crc kubenswrapper[4829]: I0217 16:14:10.681166 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681313 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681340 4829 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681386 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.68136714 +0000 UTC m=+1171.098385118 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:10 crc kubenswrapper[4829]: E0217 16:14:10.681425 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:18.681403361 +0000 UTC m=+1171.098421359 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "metrics-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.047750 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.056154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e275e91-4b6e-419e-b076-a6e221f8a8ac-cert\") pod \"infra-operator-controller-manager-79d975b745-vxvp7\" (UID: \"0e275e91-4b6e-419e-b076-a6e221f8a8ac\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.149630 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.149879 4829 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.149926 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert podName:a1ec01cb-62ae-4855-b830-69f896bfb5a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:34.14991041 +0000 UTC m=+1186.566928398 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" (UID: "a1ec01cb-62ae-4855-b830-69f896bfb5a4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.259033 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.259249 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4fmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-shssw_openstack-operators(a711806b-ee8c-4fb8-b5da-da5e90ef06c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.260622 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podUID="a711806b-ee8c-4fb8-b5da-da5e90ef06c6" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.338516 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.763209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.763315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.763475 4829 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: E0217 16:14:18.763652 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs podName:aa745829-0443-47a5-8c10-701bd4645505 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:34.763626605 +0000 UTC m=+1187.180644583 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs") pod "openstack-operator-controller-manager-546d579865-h84k8" (UID: "aa745829-0443-47a5-8c10-701bd4645505") : secret "webhook-server-cert" not found Feb 17 16:14:18 crc kubenswrapper[4829]: I0217 16:14:18.782482 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.096592 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.096989 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzrfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-t57qn_openstack-operators(60ea5425-d352-4d97-bedf-f01d07c89949): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.098334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podUID="60ea5425-d352-4d97-bedf-f01d07c89949" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.158651 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podUID="a711806b-ee8c-4fb8-b5da-da5e90ef06c6" Feb 17 16:14:19 crc kubenswrapper[4829]: E0217 16:14:19.158688 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podUID="60ea5425-d352-4d97-bedf-f01d07c89949" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.032239 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.033366 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6v8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-thspt_openstack-operators(4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.034681 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podUID="4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.217243 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podUID="4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.797713 4829 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.797966 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9kbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-m4df4_openstack-operators(3aab9223-4e3f-4657-afc2-91d0e0948542): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:22 crc kubenswrapper[4829]: E0217 16:14:22.799158 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podUID="3aab9223-4e3f-4657-afc2-91d0e0948542" Feb 17 16:14:23 crc kubenswrapper[4829]: E0217 16:14:23.191381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podUID="3aab9223-4e3f-4657-afc2-91d0e0948542" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.394317 4829 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.394870 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ft2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-fw4gg_openstack-operators(8642cada-3458-43cc-90aa-cf66a1cd6426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.396148 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podUID="8642cada-3458-43cc-90aa-cf66a1cd6426" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.953397 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.953587 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jtqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-2xmzw_openstack-operators(5239a5a9-e318-4db3-8394-0427d57d4ae5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:24 crc kubenswrapper[4829]: E0217 16:14:24.955657 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podUID="5239a5a9-e318-4db3-8394-0427d57d4ae5" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.222341 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podUID="5239a5a9-e318-4db3-8394-0427d57d4ae5" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.223187 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podUID="8642cada-3458-43cc-90aa-cf66a1cd6426" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.614492 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.614777 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jplzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-czbvb_openstack-operators(f083cb81-0369-46de-9562-406736ae7e2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:25 crc kubenswrapper[4829]: E0217 16:14:25.616193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podUID="f083cb81-0369-46de-9562-406736ae7e2f" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.236862 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podUID="f083cb81-0369-46de-9562-406736ae7e2f" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.270778 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.270978 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dv5r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-274tg_openstack-operators(958dea67-d633-4f5c-a18e-2aca1a55020c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:26 crc kubenswrapper[4829]: E0217 16:14:26.272960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podUID="958dea67-d633-4f5c-a18e-2aca1a55020c" Feb 17 16:14:27 crc kubenswrapper[4829]: E0217 16:14:27.238600 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podUID="958dea67-d633-4f5c-a18e-2aca1a55020c" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.076843 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.077363 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g85r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-gcxk7_openstack-operators(5b6c89f9-2c4f-4bab-8d8b-cd746acb3426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.078977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podUID="5b6c89f9-2c4f-4bab-8d8b-cd746acb3426" Feb 17 16:14:29 crc kubenswrapper[4829]: E0217 16:14:29.266041 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podUID="5b6c89f9-2c4f-4bab-8d8b-cd746acb3426" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.208188 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.208606 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d74g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-zbs8b_openstack-operators(23c03a71-fe86-47ad-ae4b-dd49bc07f2b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.210105 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podUID="23c03a71-fe86-47ad-ae4b-dd49bc07f2b0" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.280314 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podUID="23c03a71-fe86-47ad-ae4b-dd49bc07f2b0" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.968619 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.969096 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6965,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-hmtfv_openstack-operators(84a22a6b-1fb5-4959-9342-0bcc4b033b68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:31 crc kubenswrapper[4829]: E0217 16:14:31.970356 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podUID="84a22a6b-1fb5-4959-9342-0bcc4b033b68" Feb 17 16:14:32 crc kubenswrapper[4829]: E0217 16:14:32.289127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podUID="84a22a6b-1fb5-4959-9342-0bcc4b033b68" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.865822 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.866314 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rsx42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-mnrxb_openstack-operators(72028d3b-7fd0-4b17-b0c2-c92bc7134637): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:33 crc kubenswrapper[4829]: E0217 16:14:33.867545 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podUID="72028d3b-7fd0-4b17-b0c2-c92bc7134637" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.166365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.174887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1ec01cb-62ae-4855-b830-69f896bfb5a4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx\" (UID: \"a1ec01cb-62ae-4855-b830-69f896bfb5a4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.182878 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.310200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podUID="72028d3b-7fd0-4b17-b0c2-c92bc7134637" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.430430 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.430815 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-chvcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-9md4j_openstack-operators(dd52262f-900a-4801-8c4c-f79787b6b715): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:34 crc kubenswrapper[4829]: E0217 16:14:34.432092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podUID="dd52262f-900a-4801-8c4c-f79787b6b715" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.776101 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.793961 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa745829-0443-47a5-8c10-701bd4645505-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-h84k8\" (UID: \"aa745829-0443-47a5-8c10-701bd4645505\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:34 crc kubenswrapper[4829]: I0217 16:14:34.987920 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.316802 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podUID="dd52262f-900a-4801-8c4c-f79787b6b715" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.803009 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.803721 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-dlskg_openstack-operators(6084260e-35c2-43b5-9606-98e1e0463e98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:35 crc kubenswrapper[4829]: E0217 16:14:35.805013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podUID="6084260e-35c2-43b5-9606-98e1e0463e98" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.326404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podUID="6084260e-35c2-43b5-9606-98e1e0463e98" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.505901 4829 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.506083 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxqbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-ndxcg_openstack-operators(2237138f-4450-415b-9646-c2ab9f88194a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:36 crc kubenswrapper[4829]: E0217 16:14:36.507310 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podUID="2237138f-4450-415b-9646-c2ab9f88194a" Feb 17 16:14:37 crc kubenswrapper[4829]: E0217 16:14:37.333339 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podUID="2237138f-4450-415b-9646-c2ab9f88194a" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.829806 4829 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.830246 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldsdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-nksk9_openstack-operators(62cfcaa0-5c8a-4a67-95b7-83aa695a8640): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:38 crc kubenswrapper[4829]: E0217 16:14:38.831435 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podUID="62cfcaa0-5c8a-4a67-95b7-83aa695a8640" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326133 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326240 4829 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.326446 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qfv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66fcc5ff49-8lb5d_openstack-operators(584ed73b-c202-4d41-b884-cd9c279b3c0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.328121 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" podUID="584ed73b-c202-4d41-b884-cd9c279b3c0d" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.355150 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podUID="62cfcaa0-5c8a-4a67-95b7-83aa695a8640" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.355430 4829 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" podUID="584ed73b-c202-4d41-b884-cd9c279b3c0d" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.891711 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.892040 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frqwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fht2z_openstack-operators(eaf75815-7964-4bc0-aeae-d3306764d7f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:39 crc kubenswrapper[4829]: E0217 16:14:39.894535 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podUID="eaf75815-7964-4bc0-aeae-d3306764d7f4" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.362967 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" event={"ID":"bb32d7a2-68ff-4511-a04f-fa09657791db","Type":"ContainerStarted","Data":"44e302407bf42f169a81f99c7f85f66a40c74db306e94e1e5459b6862f389921"} Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.363376 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:40 crc kubenswrapper[4829]: E0217 16:14:40.367552 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podUID="eaf75815-7964-4bc0-aeae-d3306764d7f4" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.444758 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" podStartSLOduration=3.251040083 podStartE2EDuration="39.444742229s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.006936933 +0000 UTC m=+1155.423954911" lastFinishedPulling="2026-02-17 16:14:39.200639069 +0000 UTC m=+1191.617657057" observedRunningTime="2026-02-17 16:14:40.398161047 +0000 UTC m=+1192.815179025" watchObservedRunningTime="2026-02-17 16:14:40.444742229 +0000 UTC m=+1192.861760207" Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.477042 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx"] Feb 17 16:14:40 crc kubenswrapper[4829]: W0217 16:14:40.477394 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ec01cb_62ae_4855_b830_69f896bfb5a4.slice/crio-c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3 WatchSource:0}: Error finding container c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3: Status 404 returned error can't find the container with id c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3 Feb 17 16:14:40 crc 
kubenswrapper[4829]: I0217 16:14:40.629331 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-h84k8"] Feb 17 16:14:40 crc kubenswrapper[4829]: I0217 16:14:40.733120 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7"] Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.385797 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" event={"ID":"3aab9223-4e3f-4657-afc2-91d0e0948542","Type":"ContainerStarted","Data":"e6a05a16598fcc712e79333bd8ec370bd28c9c6434cc4bd780516ded76b24202"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.386313 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.398931 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" event={"ID":"5239a5a9-e318-4db3-8394-0427d57d4ae5","Type":"ContainerStarted","Data":"5287f7ab06362448cad1ac5b6179ebfff1bed7065b50eec0570cc90de28093ed"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.399139 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.401992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" event={"ID":"f3add145-231f-4d7b-b9dd-115026b2a05e","Type":"ContainerStarted","Data":"b5e8f8d786bd77c40771ed73d08dda00030fefe31e45537d562efb4a51314225"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.402096 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.404669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" podStartSLOduration=5.143086624 podStartE2EDuration="40.404658989s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813290652 +0000 UTC m=+1157.230308630" lastFinishedPulling="2026-02-17 16:14:40.074863017 +0000 UTC m=+1192.491880995" observedRunningTime="2026-02-17 16:14:41.404378552 +0000 UTC m=+1193.821396530" watchObservedRunningTime="2026-02-17 16:14:41.404658989 +0000 UTC m=+1193.821676967" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.409289 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" event={"ID":"60ea5425-d352-4d97-bedf-f01d07c89949","Type":"ContainerStarted","Data":"7a1eb64704035c19912673695c845fd607ba6a92e81fbd5aaae355adb31fcbdb"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.409487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.412830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" event={"ID":"4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3","Type":"ContainerStarted","Data":"ce342504e318f487bd4bb96fb5e26484b68657d130564c90095d14710ec175b1"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.413022 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.414935 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" event={"ID":"a711806b-ee8c-4fb8-b5da-da5e90ef06c6","Type":"ContainerStarted","Data":"9673f23a882744c4fb3ae306fe1a79929982bc582d496145935cc0d12a9c6ca6"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.415068 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.416498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" event={"ID":"f083cb81-0369-46de-9562-406736ae7e2f","Type":"ContainerStarted","Data":"94125c94c9fa67af553ae8d19e67730d90936476077f3560dc3ba8a25fe9993d"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.417215 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420836 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" event={"ID":"aa745829-0443-47a5-8c10-701bd4645505","Type":"ContainerStarted","Data":"8f9293ea2a4503e3a6a9ce101256db803957c7d61382257a1375ce64d2e3c2e7"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420869 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" event={"ID":"aa745829-0443-47a5-8c10-701bd4645505","Type":"ContainerStarted","Data":"6f50a80e19745b9a332663211ea78b8ba7ff6dad4a9d4dee8831d248156b21d7"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.420903 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.423053 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" podStartSLOduration=3.875894614 podStartE2EDuration="40.423038074s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.361638533 +0000 UTC m=+1155.778656511" lastFinishedPulling="2026-02-17 16:14:39.908781993 +0000 UTC m=+1192.325799971" observedRunningTime="2026-02-17 16:14:41.420907136 +0000 UTC m=+1193.837925114" watchObservedRunningTime="2026-02-17 16:14:41.423038074 +0000 UTC m=+1193.840056052" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.427204 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" event={"ID":"8642cada-3458-43cc-90aa-cf66a1cd6426","Type":"ContainerStarted","Data":"e15d4312827b1945f9e0486773b8c0b032d6b8d88b139de4027a0c33ae8dc831"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.427526 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.429153 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" event={"ID":"0e275e91-4b6e-419e-b076-a6e221f8a8ac","Type":"ContainerStarted","Data":"509b47f2ee0a1479489a30b875afee6ce1de270c0c6c3179e0d0a884b5eb0790"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.432129 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" event={"ID":"a1ec01cb-62ae-4855-b830-69f896bfb5a4","Type":"ContainerStarted","Data":"c5595e4d258a091f625f533d8264b9046b79d1f651768c70f484850eae1b16b3"} Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.434955 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" podStartSLOduration=4.230409141 podStartE2EDuration="39.434942113s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.812768608 +0000 UTC m=+1157.229786586" lastFinishedPulling="2026-02-17 16:14:40.01730158 +0000 UTC m=+1192.434319558" observedRunningTime="2026-02-17 16:14:41.432344003 +0000 UTC m=+1193.849361981" watchObservedRunningTime="2026-02-17 16:14:41.434942113 +0000 UTC m=+1193.851960091" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.456764 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" podStartSLOduration=4.399146141 podStartE2EDuration="40.45674574s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.01810966 +0000 UTC m=+1156.435127638" lastFinishedPulling="2026-02-17 16:14:40.075709259 +0000 UTC m=+1192.492727237" observedRunningTime="2026-02-17 16:14:41.456213355 +0000 UTC m=+1193.873231333" watchObservedRunningTime="2026-02-17 16:14:41.45674574 +0000 UTC m=+1193.873763718" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.493090 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" podStartSLOduration=4.2196628369999996 podStartE2EDuration="39.493069776s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.730969553 +0000 UTC m=+1157.147987531" lastFinishedPulling="2026-02-17 16:14:40.004376492 +0000 UTC m=+1192.421394470" observedRunningTime="2026-02-17 16:14:41.485926133 +0000 UTC m=+1193.902944111" watchObservedRunningTime="2026-02-17 16:14:41.493069776 +0000 UTC m=+1193.910087754" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.522268 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" podStartSLOduration=39.52224952 podStartE2EDuration="39.52224952s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:41.515849068 +0000 UTC m=+1193.932867046" watchObservedRunningTime="2026-02-17 16:14:41.52224952 +0000 UTC m=+1193.939267498" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.538669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" podStartSLOduration=4.236083767 podStartE2EDuration="40.538653251s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.641661261 +0000 UTC m=+1156.058679239" lastFinishedPulling="2026-02-17 16:14:39.944230745 +0000 UTC m=+1192.361248723" observedRunningTime="2026-02-17 16:14:41.536288197 +0000 UTC m=+1193.953306165" watchObservedRunningTime="2026-02-17 16:14:41.538653251 +0000 UTC m=+1193.955671229" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.561605 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" podStartSLOduration=4.534594267 podStartE2EDuration="40.561590847s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.005991001 +0000 UTC m=+1156.423008979" lastFinishedPulling="2026-02-17 16:14:40.032987581 +0000 UTC m=+1192.450005559" observedRunningTime="2026-02-17 16:14:41.558194416 +0000 UTC m=+1193.975212394" watchObservedRunningTime="2026-02-17 16:14:41.561590847 +0000 UTC m=+1193.978608825" Feb 17 16:14:41 crc kubenswrapper[4829]: I0217 16:14:41.584536 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" podStartSLOduration=3.849010892 podStartE2EDuration="40.584516544s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.26999781 +0000 UTC m=+1155.687015788" lastFinishedPulling="2026-02-17 16:14:40.005503462 +0000 UTC m=+1192.422521440" observedRunningTime="2026-02-17 16:14:41.579146309 +0000 UTC m=+1193.996164287" watchObservedRunningTime="2026-02-17 16:14:41.584516544 +0000 UTC m=+1194.001534522" Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.451168 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" event={"ID":"958dea67-d633-4f5c-a18e-2aca1a55020c","Type":"ContainerStarted","Data":"88293cbf2f1671c36e7f8c0cbf620ce8258bb20c5f7a0c24a5039de005eaccd4"} Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.452592 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:43 crc kubenswrapper[4829]: I0217 16:14:43.473048 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" podStartSLOduration=4.321775336 podStartE2EDuration="41.473030304s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.716092798 +0000 UTC m=+1157.133110776" lastFinishedPulling="2026-02-17 16:14:41.867347756 +0000 UTC m=+1194.284365744" observedRunningTime="2026-02-17 16:14:43.465186913 +0000 UTC m=+1195.882204891" watchObservedRunningTime="2026-02-17 16:14:43.473030304 +0000 UTC m=+1195.890048282" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.489949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" 
event={"ID":"5b6c89f9-2c4f-4bab-8d8b-cd746acb3426","Type":"ContainerStarted","Data":"29c1a92d22c4ca1ecaea93dedb5f38ae2baad52a9b245632c80ec34b2e8e599c"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.490615 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.491268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" event={"ID":"84a22a6b-1fb5-4959-9342-0bcc4b033b68","Type":"ContainerStarted","Data":"32da7910f9c9c18a966f47442a8fb830ae393db663018d85f7b4d8b379ff45a4"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.491439 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.492668 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" event={"ID":"23c03a71-fe86-47ad-ae4b-dd49bc07f2b0","Type":"ContainerStarted","Data":"2165e187fbe350af612738e6419631589c08a52f45771b261a5498605f214f2a"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.492819 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.494417 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" event={"ID":"0e275e91-4b6e-419e-b076-a6e221f8a8ac","Type":"ContainerStarted","Data":"2fd522f7361e535a9b193d19ccbdd8189ba328384534d27c5d01aee1a2c103f7"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.494552 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.495869 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" event={"ID":"a1ec01cb-62ae-4855-b830-69f896bfb5a4","Type":"ContainerStarted","Data":"564a220e3437a0e2a4a235820f60a04f246742eb4dfee12a29862a3eb89e72a3"} Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.496036 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.514382 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" podStartSLOduration=4.999781177 podStartE2EDuration="46.514362169s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.855195472 +0000 UTC m=+1157.272213450" lastFinishedPulling="2026-02-17 16:14:46.369776464 +0000 UTC m=+1198.786794442" observedRunningTime="2026-02-17 16:14:47.505995964 +0000 UTC m=+1199.923013952" watchObservedRunningTime="2026-02-17 16:14:47.514362169 +0000 UTC m=+1199.931380157" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.540235 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" podStartSLOduration=40.914486232 podStartE2EDuration="46.540215683s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:40.744102515 +0000 UTC m=+1193.161120493" lastFinishedPulling="2026-02-17 16:14:46.369831966 +0000 UTC m=+1198.786849944" observedRunningTime="2026-02-17 16:14:47.528389446 +0000 UTC m=+1199.945407424" watchObservedRunningTime="2026-02-17 16:14:47.540215683 +0000 UTC m=+1199.957233661" Feb 17 16:14:47 crc 
kubenswrapper[4829]: I0217 16:14:47.557272 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" podStartSLOduration=3.8221028759999998 podStartE2EDuration="46.557251372s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.634024122 +0000 UTC m=+1156.051042100" lastFinishedPulling="2026-02-17 16:14:46.369172618 +0000 UTC m=+1198.786190596" observedRunningTime="2026-02-17 16:14:47.548199538 +0000 UTC m=+1199.965217536" watchObservedRunningTime="2026-02-17 16:14:47.557251372 +0000 UTC m=+1199.974269360" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.588778 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" podStartSLOduration=39.709311687 podStartE2EDuration="45.588757828s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:40.490443466 +0000 UTC m=+1192.907461444" lastFinishedPulling="2026-02-17 16:14:46.369889617 +0000 UTC m=+1198.786907585" observedRunningTime="2026-02-17 16:14:47.585160522 +0000 UTC m=+1200.002178510" watchObservedRunningTime="2026-02-17 16:14:47.588757828 +0000 UTC m=+1200.005775816" Feb 17 16:14:47 crc kubenswrapper[4829]: I0217 16:14:47.604693 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" podStartSLOduration=4.138386159 podStartE2EDuration="45.604676507s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.90549548 +0000 UTC m=+1157.322513458" lastFinishedPulling="2026-02-17 16:14:46.371785828 +0000 UTC m=+1198.788803806" observedRunningTime="2026-02-17 16:14:47.601781728 +0000 UTC m=+1200.018799716" watchObservedRunningTime="2026-02-17 16:14:47.604676507 +0000 UTC m=+1200.021694485" Feb 17 16:14:49 crc 
kubenswrapper[4829]: I0217 16:14:49.515884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" event={"ID":"72028d3b-7fd0-4b17-b0c2-c92bc7134637","Type":"ContainerStarted","Data":"d82f565b537339cc08b4424bf144ed18cdb420ad45939eeafea50c632a2efd5c"} Feb 17 16:14:49 crc kubenswrapper[4829]: I0217 16:14:49.516593 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:14:49 crc kubenswrapper[4829]: I0217 16:14:49.546006 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" podStartSLOduration=3.281491231 podStartE2EDuration="47.545981615s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.672233625 +0000 UTC m=+1157.089251603" lastFinishedPulling="2026-02-17 16:14:48.936723999 +0000 UTC m=+1201.353741987" observedRunningTime="2026-02-17 16:14:49.541435083 +0000 UTC m=+1201.958453061" watchObservedRunningTime="2026-02-17 16:14:49.545981615 +0000 UTC m=+1201.962999613" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.527691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" event={"ID":"dd52262f-900a-4801-8c4c-f79787b6b715","Type":"ContainerStarted","Data":"2304bec75914d03abfe30afa3a98c2eeb838b02f618e63413cdf3a424ff7d17c"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.528285 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.530442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" 
event={"ID":"6084260e-35c2-43b5-9606-98e1e0463e98","Type":"ContainerStarted","Data":"257fd65b29ecb0a135895cb8e372e3088279802b47674eedcc1f9aed9f440f0c"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.531311 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.533931 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" event={"ID":"2237138f-4450-415b-9646-c2ab9f88194a","Type":"ContainerStarted","Data":"df3f044bea487993acecd8a1aaf0b36ba2e6e44739e978590a5f7d79aeff183d"} Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.583294 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" podStartSLOduration=3.4873499 podStartE2EDuration="48.583275096s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813104487 +0000 UTC m=+1157.230122465" lastFinishedPulling="2026-02-17 16:14:49.909029663 +0000 UTC m=+1202.326047661" observedRunningTime="2026-02-17 16:14:50.58191906 +0000 UTC m=+1202.998937078" watchObservedRunningTime="2026-02-17 16:14:50.583275096 +0000 UTC m=+1203.000293094" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.586567 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" podStartSLOduration=3.3215519159999998 podStartE2EDuration="49.586554294s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.646647226 +0000 UTC m=+1156.063665204" lastFinishedPulling="2026-02-17 16:14:49.911649604 +0000 UTC m=+1202.328667582" observedRunningTime="2026-02-17 16:14:50.557289317 +0000 UTC m=+1202.974307285" watchObservedRunningTime="2026-02-17 
16:14:50.586554294 +0000 UTC m=+1203.003572282" Feb 17 16:14:50 crc kubenswrapper[4829]: I0217 16:14:50.608342 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" podStartSLOduration=2.593888394 podStartE2EDuration="49.608315329s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:02.8957878 +0000 UTC m=+1155.312805778" lastFinishedPulling="2026-02-17 16:14:49.910214715 +0000 UTC m=+1202.327232713" observedRunningTime="2026-02-17 16:14:50.600298584 +0000 UTC m=+1203.017316562" watchObservedRunningTime="2026-02-17 16:14:50.608315329 +0000 UTC m=+1203.025333327" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.170695 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-w97sk" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.229554 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-shssw" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.243225 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7j8p7" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.354969 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-hmtfv" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.430669 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-t57qn" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.575605 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-fw4gg" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.603146 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-gcxk7" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.615190 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m4df4" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.626252 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-czbvb" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.651246 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.733269 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-thspt" Feb 17 16:14:52 crc kubenswrapper[4829]: I0217 16:14:52.807226 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-274tg" Feb 17 16:14:53 crc kubenswrapper[4829]: I0217 16:14:53.011184 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-zbs8b" Feb 17 16:14:53 crc kubenswrapper[4829]: I0217 16:14:53.085254 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2xmzw" Feb 17 16:14:54 crc kubenswrapper[4829]: I0217 16:14:54.196921 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx" Feb 17 16:14:54 crc kubenswrapper[4829]: I0217 16:14:54.995315 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-546d579865-h84k8" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.582813 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" event={"ID":"584ed73b-c202-4d41-b884-cd9c279b3c0d","Type":"ContainerStarted","Data":"a3dfbadaf79b256b9a88b904f4325eb86d9ecc1fa6bf849bd44ee9f840085a1d"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.583298 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.586806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" event={"ID":"62cfcaa0-5c8a-4a67-95b7-83aa695a8640","Type":"ContainerStarted","Data":"fb12d147e287a0d23b1180603855d9346f90298adeb33461cedeaa1c78e5ded9"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.587195 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.595113 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" event={"ID":"eaf75815-7964-4bc0-aeae-d3306764d7f4","Type":"ContainerStarted","Data":"5342185a2f3423e6911215458c7528f4dd254d61e795e2d8462863f544919346"} Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.598754 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" 
podStartSLOduration=3.526874523 podStartE2EDuration="53.598731894s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.813072476 +0000 UTC m=+1157.230090454" lastFinishedPulling="2026-02-17 16:14:54.884929837 +0000 UTC m=+1207.301947825" observedRunningTime="2026-02-17 16:14:55.596884034 +0000 UTC m=+1208.013902002" watchObservedRunningTime="2026-02-17 16:14:55.598731894 +0000 UTC m=+1208.015749882" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.626243 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" podStartSLOduration=3.728027552 podStartE2EDuration="54.626221213s" podCreationTimestamp="2026-02-17 16:14:01 +0000 UTC" firstStartedPulling="2026-02-17 16:14:03.98869254 +0000 UTC m=+1156.405710518" lastFinishedPulling="2026-02-17 16:14:54.886886161 +0000 UTC m=+1207.303904179" observedRunningTime="2026-02-17 16:14:55.618260899 +0000 UTC m=+1208.035278887" watchObservedRunningTime="2026-02-17 16:14:55.626221213 +0000 UTC m=+1208.043239191" Feb 17 16:14:55 crc kubenswrapper[4829]: I0217 16:14:55.638085 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fht2z" podStartSLOduration=4.336953325 podStartE2EDuration="53.638063531s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:04.812981313 +0000 UTC m=+1157.229999291" lastFinishedPulling="2026-02-17 16:14:54.114091469 +0000 UTC m=+1206.531109497" observedRunningTime="2026-02-17 16:14:55.630861557 +0000 UTC m=+1208.047879535" watchObservedRunningTime="2026-02-17 16:14:55.638063531 +0000 UTC m=+1208.055081509" Feb 17 16:14:58 crc kubenswrapper[4829]: I0217 16:14:58.348531 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-vxvp7" Feb 17 16:15:00 crc 
kubenswrapper[4829]: I0217 16:15:00.153062 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.154964 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.160082 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.160111 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.162018 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302295 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302361 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.302409 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404537 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.404616 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.406226 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.413320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.426402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"collect-profiles-29522415-vfscd\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.484200 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:00 crc kubenswrapper[4829]: I0217 16:15:00.961346 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 16:15:00 crc kubenswrapper[4829]: W0217 16:15:00.970603 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88fd8a6_9c2a_4529_81eb_5495aa3237c8.slice/crio-323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459 WatchSource:0}: Error finding container 323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459: Status 404 returned error can't find the container with id 323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459 Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648408 4829 generic.go:334] "Generic (PLEG): container finished" podID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerID="595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c" exitCode=0 Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerDied","Data":"595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c"} Feb 17 16:15:01 crc kubenswrapper[4829]: I0217 16:15:01.648702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerStarted","Data":"323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459"} Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.117912 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-dlskg" Feb 17 
16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.312079 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9md4j" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.559288 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-nksk9" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.655318 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-ndxcg" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.711147 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-mnrxb" Feb 17 16:15:02 crc kubenswrapper[4829]: I0217 16:15:02.823926 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-8lb5d" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.084168 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169441 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169657 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.169735 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") pod \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\" (UID: \"b88fd8a6-9c2a-4529-81eb-5495aa3237c8\") " Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.170591 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.179723 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.179811 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx" (OuterVolumeSpecName: "kube-api-access-9cfdx") pod "b88fd8a6-9c2a-4529-81eb-5495aa3237c8" (UID: "b88fd8a6-9c2a-4529-81eb-5495aa3237c8"). InnerVolumeSpecName "kube-api-access-9cfdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271355 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cfdx\" (UniqueName: \"kubernetes.io/projected/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-kube-api-access-9cfdx\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271387 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.271396 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b88fd8a6-9c2a-4529-81eb-5495aa3237c8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.675683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" event={"ID":"b88fd8a6-9c2a-4529-81eb-5495aa3237c8","Type":"ContainerDied","Data":"323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459"} Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.676002 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323464fce178ba52a06fc9deb27d3123484a703d7393e2a11cb27a5d17efe459" Feb 17 16:15:03 crc kubenswrapper[4829]: I0217 16:15:03.675719 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.601315 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:20 crc kubenswrapper[4829]: E0217 16:15:20.602151 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.602167 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.602395 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" containerName="collect-profiles" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.603449 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613240 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-prqgw" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613623 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613702 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xml\" (UniqueName: 
\"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613779 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.613796 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.623997 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.671596 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.672865 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.675321 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.689846 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.715181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.715275 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xml\" (UniqueName: 
\"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.716431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.756372 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"dnsmasq-dns-675f4bcbfc-wffgx\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.816948 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.816996 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.817033 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.918773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.919115 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.919808 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.920405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.920516 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: 
\"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.934767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.937083 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"dnsmasq-dns-78dd6ddcc-4zwb8\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:20 crc kubenswrapper[4829]: I0217 16:15:20.992896 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.455996 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.542896 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.758543 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" event={"ID":"8d5f50bb-1dbc-4661-91f3-66c29ea7430e","Type":"ContainerStarted","Data":"e7c4359a6a86de75a2f21197c9258209e81a5ec6d1e0f7b03fc162a1d9d53e77"} Feb 17 16:15:21 crc kubenswrapper[4829]: I0217 16:15:21.761195 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" event={"ID":"ffccb67d-5096-4a51-adf3-4bf3739373ea","Type":"ContainerStarted","Data":"cacd8eed3fb0b0769b53687fb7ee29d23d0b51c36a9b2e50197b211f45b0f9c2"} Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.388186 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:15:23 crc 
kubenswrapper[4829]: I0217 16:15:23.405069 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.406460 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.419767 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.582882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.582968 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.583011 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: 
\"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.684634 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.685430 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.686025 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.711552 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"dnsmasq-dns-666b6646f7-drgmb\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 
17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.731085 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.732707 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.743300 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.745188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.783176 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.896733 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.897687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:23 crc kubenswrapper[4829]: I0217 16:15:23.897767 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: 
\"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:23.999564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:23.999907 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000403 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.000827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc 
kubenswrapper[4829]: I0217 16:15:24.026296 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"dnsmasq-dns-57d769cc4f-ftmfx\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.175648 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.351812 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:15:24 crc kubenswrapper[4829]: W0217 16:15:24.396433 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c13771b_c220_4ce6_9d1c_3c76af499220.slice/crio-ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a WatchSource:0}: Error finding container ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a: Status 404 returned error can't find the container with id ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.558704 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.560612 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.562471 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6sqhz" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.562778 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563358 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563696 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.563922 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.564648 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.564763 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.583329 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.596314 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.597882 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.613939 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.616831 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.638798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.662641 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.693911 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:15:24 crc kubenswrapper[4829]: W0217 16:15:24.701351 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66112eb6_8e4a_4469_8cfd_825bf6b7563d.slice/crio-8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73 WatchSource:0}: Error finding container 8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73: Status 404 returned error can't find the container with id 8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73 Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716614 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716647 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716669 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716718 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716756 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.716799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.718626 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.718720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719216 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719425 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719461 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719528 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719590 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719661 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719712 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719885 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: 
\"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719963 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.719998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720042 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720068 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720122 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720654 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.720667 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.821971 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822050 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822113 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822126 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822142 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822177 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822194 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822223 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822257 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822294 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822310 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822347 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " 
pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822388 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822406 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822422 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822464 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 
16:15:24.822480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822492 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822563 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.822594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.823641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.823918 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.824463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825043 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825525 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.825850 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.826219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.834764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.835357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836170 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836388 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.836402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.837404 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.839247 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840030 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 
16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.840999 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.842004 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.842057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843818 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843847 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cec88d4327ff12753cbf1d7636d4616ad5b51e6f71f7c68ee07d08bc8a1cc1e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843869 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.843895 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2fb41440360b87637c863c905d7642fdbb5fac4b43922d0db49761300e3e982/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844345 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844423 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844441 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b279f517412c9d421e4d384ad7a1032e9021db2370e77c854a0ec0125cf75d39/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.844799 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.849300 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.849625 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " 
pod="openstack/rabbitmq-server-1" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.852486 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.855436 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.858069 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.861701 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.875250 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerStarted","Data":"ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a"} Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.877314 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.880076 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.883663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" event={"ID":"66112eb6-8e4a-4469-8cfd-825bf6b7563d","Type":"ContainerStarted","Data":"8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73"} Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.883748 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891197 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891443 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891393 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9x5xf" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891581 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.891626 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.894321 4829 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.915038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.917936 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " pod="openstack/rabbitmq-server-0" Feb 17 16:15:24 crc kubenswrapper[4829]: I0217 16:15:24.970295 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " pod="openstack/rabbitmq-server-2" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.014745 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " pod="openstack/rabbitmq-server-1" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.028933 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.028992 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029035 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029099 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.029330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130732 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130821 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130846 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130884 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130910 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130934 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130968 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.130985 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.131547 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132047 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132424 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.132606 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.133719 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc 
kubenswrapper[4829]: I0217 16:15:25.136346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.136649 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.139336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.139834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.142374 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.142411 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c712c179c4211caeb2d08f251b409f456d9a156c71e8c917f92effa050520833/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.159046 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.191132 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.201931 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.225814 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.242292 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.318696 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.706832 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:15:25 crc kubenswrapper[4829]: W0217 16:15:25.884927 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee690a85_cf83_4e55_a69d_ca6bd136bf07.slice/crio-a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc WatchSource:0}: Error finding container a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc: Status 404 returned error can't find the container with id a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc Feb 17 16:15:25 crc kubenswrapper[4829]: W0217 16:15:25.886970 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod328bcfe0_93b6_44bb_83ca_2b3a105f1548.slice/crio-bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928 WatchSource:0}: Error finding container bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928: Status 404 returned error can't find the container with id bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928 Feb 17 16:15:25 crc kubenswrapper[4829]: I0217 16:15:25.908351 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.026817 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.028455 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.030447 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ztmt6" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.031817 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.033331 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.033518 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.037614 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.039188 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 16:15:26 crc kubenswrapper[4829]: W0217 16:15:26.085487 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257c3943_bfcb_409b_a915_bacfd95d9c93.slice/crio-c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e WatchSource:0}: Error finding container c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e: Status 404 returned error can't find the container with id c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.086709 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189683 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189824 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.189860 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.228668 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:15:26 crc kubenswrapper[4829]: W0217 16:15:26.235425 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18c52f3_efc1_4a9b_a7b0_b19bc419dd4d.slice/crio-aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb WatchSource:0}: Error finding container aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb: Status 404 returned error can't find the container with id aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") pod \"openstack-galera-0\" (UID: 
\"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292494 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292526 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292600 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292682 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 
16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.292726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293426 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-generated\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293621 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kolla-config\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.293711 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-config-data-default\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.294485 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903a9538-3e9d-4567-a9c2-0eeaaf450b85-operator-scripts\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.299078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304775 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304804 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb65fd8172e557afa0bcf95dbc3a5ab3334f442ae8b5643b4c42d5eeefe12cd5/globalmount\"" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.304946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a9538-3e9d-4567-a9c2-0eeaaf450b85-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.318127 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc96l\" (UniqueName: \"kubernetes.io/projected/903a9538-3e9d-4567-a9c2-0eeaaf450b85-kube-api-access-kc96l\") 
pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.351673 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c54b306-6a99-4759-8c5c-9ea7a6b1b6f3\") pod \"openstack-galera-0\" (UID: \"903a9538-3e9d-4567-a9c2-0eeaaf450b85\") " pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.643287 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.916982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.918661 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.934295 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc"} Feb 17 16:15:26 crc kubenswrapper[4829]: I0217 16:15:26.936445 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e"} Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.223156 4829 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: W0217 16:15:27.311827 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903a9538_3e9d_4567_a9c2_0eeaaf450b85.slice/crio-99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883 WatchSource:0}: Error finding container 99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883: Status 404 returned error can't find the container with id 99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883 Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.368406 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.371652 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.379780 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.379982 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.380173 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.380945 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9mdf7" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.388563 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.516501 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: 
I0217 16:15:27.517844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.519959 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.520223 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-z9ct4" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.520373 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535388 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535498 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535531 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535550 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535596 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535617 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535657 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.535687 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.541435 
4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637392 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637433 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637480 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637524 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " 
pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637584 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637608 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637636 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637653 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637682 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 
16:15:27.637729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637764 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.637791 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.639204 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.639543 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.640078 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.643415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.644171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.648672 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3949cc3c-e03d-42b7-b07f-dbdce94d7283-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.649108 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.649141 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7206d36e835ecb5f541b54a5de40bbe7e6392727d9a7c454e3983214fdd1c801/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.656631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qts8\" (UniqueName: \"kubernetes.io/projected/3949cc3c-e03d-42b7-b07f-dbdce94d7283-kube-api-access-6qts8\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.706205 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee198a61-08f6-4572-91dc-83fb824b484c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ee198a61-08f6-4572-91dc-83fb824b484c\") pod \"openstack-cell1-galera-0\" (UID: \"3949cc3c-e03d-42b7-b07f-dbdce94d7283\") " pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.712336 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738920 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.738979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739031 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-kolla-config\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.739835 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e3198cb-0642-46be-a9e3-33db29446377-config-data\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.748180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.756795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e3198cb-0642-46be-a9e3-33db29446377-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.757162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4cf\" (UniqueName: \"kubernetes.io/projected/4e3198cb-0642-46be-a9e3-33db29446377-kube-api-access-rm4cf\") pod \"memcached-0\" (UID: \"4e3198cb-0642-46be-a9e3-33db29446377\") " pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.849159 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 17 16:15:27 crc kubenswrapper[4829]: I0217 16:15:27.985407 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"99c0a522272fca2f93c605caeb76b5df9c93e6f2f44c8424bc3ed3eb280ac883"} Feb 17 16:15:28 crc kubenswrapper[4829]: I0217 16:15:28.574281 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:15:28 crc kubenswrapper[4829]: I0217 16:15:28.726806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.079700 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"5a2e8b048098164d9ed25ec98a771c68bee3c41abe41b76c8e5e8b0a15f1ff46"} Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.976765 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.978656 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:15:29 crc kubenswrapper[4829]: I0217 16:15:29.981408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zktxq" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.016419 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.100782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.203181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.236223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"kube-state-metrics-0\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.322980 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.657132 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.658686 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.660356 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-fp6pv" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.660871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.679066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"] Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.720861 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.720910 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 
16:15:30.822879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.822923 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:30 crc kubenswrapper[4829]: E0217 16:15:30.823105 4829 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 17 16:15:30 crc kubenswrapper[4829]: E0217 16:15:30.823159 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert podName:54f57142-2ddb-4c2f-a68e-ab77ff965e8c nodeName:}" failed. No retries permitted until 2026-02-17 16:15:31.323140957 +0000 UTC m=+1243.740158935 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert") pod "observability-ui-dashboards-66cbf594b5-vtctx" (UID: "54f57142-2ddb-4c2f-a68e-ab77ff965e8c") : secret "observability-ui-dashboards" not found Feb 17 16:15:30 crc kubenswrapper[4829]: I0217 16:15:30.857626 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjw6\" (UniqueName: \"kubernetes.io/projected/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-kube-api-access-fhjw6\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.012938 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.019172 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.027403 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159288 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159334 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159364 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159394 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159434 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.159818 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.175211 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.177801 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.195405 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.195677 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.198624 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.198984 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.199260 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vxmz6" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.202037 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.214307 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.247779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.255632 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.270668 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod 
\"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.271010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.271127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272195 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272611 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272631 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272815 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.272839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273123 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273221 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273238 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273532 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: 
I0217 16:15:31.273643 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273726 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.273773 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.277466 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-service-ca\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.277784 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-oauth-config\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.279833 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-trusted-ca-bundle\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.284842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c076d16-b8e7-4cec-a826-0bfde37276e5-console-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.292376 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcnw\" (UniqueName: \"kubernetes.io/projected/7c076d16-b8e7-4cec-a826-0bfde37276e5-kube-api-access-kmcnw\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.303990 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c076d16-b8e7-4cec-a826-0bfde37276e5-oauth-serving-cert\") pod \"console-86d6749f5-rhzrt\" (UID: \"7c076d16-b8e7-4cec-a826-0bfde37276e5\") " pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.381488 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382461 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382528 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382556 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382640 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382663 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382681 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382812 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.382859 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 
crc kubenswrapper[4829]: I0217 16:15:31.384146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.387708 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.389100 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.389586 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.391918 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.391956 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe3c2171ea8e537d787d3308fa5bc6f869ae05d2809df2c7eb9ceb73db78889d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392032 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392417 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54f57142-2ddb-4c2f-a68e-ab77ff965e8c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-vtctx\" (UID: \"54f57142-2ddb-4c2f-a68e-ab77ff965e8c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392449 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.392723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.395494 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.397317 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.403272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.449674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.527687 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:15:31 crc kubenswrapper[4829]: I0217 16:15:31.580728 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.122967 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-75gff"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.124300 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.127025 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6r5bm" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.127217 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.133970 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.141296 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.182716 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.185330 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.219196 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222033 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b9l\" (UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222341 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222433 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.222969 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223104 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223183 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223234 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223342 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223432 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223463 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.223525 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332302 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332383 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod 
\"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332476 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332536 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332649 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332693 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: 
I0217 16:15:33.332744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.332810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333006 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333051 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87b9l\" 
(UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333309 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-log-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.333816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-etc-ovs\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.334453 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-log\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.334815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-lib\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/741f1fbb-0699-4bb0-b46e-6eaa47595170-var-run\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " 
pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335437 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.335545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5adca8d-ac72-45d0-aa1c-3c453a78620e-var-run-ovn\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.336202 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/741f1fbb-0699-4bb0-b46e-6eaa47595170-scripts\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.336730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5adca8d-ac72-45d0-aa1c-3c453a78620e-scripts\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.338946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-ovn-controller-tls-certs\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.350253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5adca8d-ac72-45d0-aa1c-3c453a78620e-combined-ca-bundle\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.354223 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvfg\" (UniqueName: \"kubernetes.io/projected/e5adca8d-ac72-45d0-aa1c-3c453a78620e-kube-api-access-rrvfg\") pod \"ovn-controller-75gff\" (UID: \"e5adca8d-ac72-45d0-aa1c-3c453a78620e\") " pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.361601 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87b9l\" (UniqueName: \"kubernetes.io/projected/741f1fbb-0699-4bb0-b46e-6eaa47595170-kube-api-access-87b9l\") pod \"ovn-controller-ovs-kwz7l\" (UID: \"741f1fbb-0699-4bb0-b46e-6eaa47595170\") " pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.456136 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.517847 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.580676 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.590037 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.594668 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hdcf5" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612371 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612468 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.612613 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.621876 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639742 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639799 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639845 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.639930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741926 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.741973 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742060 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742082 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.742282 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.743234 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc 
kubenswrapper[4829]: I0217 16:15:33.743500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-config\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.744241 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.744261 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a42f6f73351298ff4826167c7f4d711c587190a4cbbca9131e27b0085e9331e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.746974 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.748171 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.754420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.761039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mlr\" (UniqueName: \"kubernetes.io/projected/2b04054b-6716-42c5-8e1b-d7eba2bcfe4c-kube-api-access-h9mlr\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.790717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-855dc353-89ac-4c3b-b795-97e934bf6ea2\") pod \"ovsdbserver-nb-0\" (UID: \"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:33 crc kubenswrapper[4829]: I0217 16:15:33.936271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:15:36 crc kubenswrapper[4829]: I0217 16:15:36.254401 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4e3198cb-0642-46be-a9e3-33db29446377","Type":"ContainerStarted","Data":"cb71d8e5ea1106b4ed46a413f2381d4a45026e16e4608e4fad10ecfcdbb05242"} Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.163873 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.166039 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168436 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168641 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-p52xq" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.168732 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.169688 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.186464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224492 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.224786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225002 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225208 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.225434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.327795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.328240 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.328958 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329165 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329329 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329452 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329527 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329610 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.329713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.330695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.331158 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.333145 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.333171 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb22d4f52b89ee248d8eb9b677cd90d33956744283eac5d5ab5898997f58e911/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.335727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.335811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.340224 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.343294 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbdc\" (UniqueName: \"kubernetes.io/projected/2eeefec2-2e41-4278-8c9d-889dbf5f51ea-kube-api-access-ggbdc\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.363810 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-33281598-1616-42de-8d51-b12f06a8ee93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-33281598-1616-42de-8d51-b12f06a8ee93\") pod \"ovsdbserver-sb-0\" (UID: \"2eeefec2-2e41-4278-8c9d-889dbf5f51ea\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:37 crc kubenswrapper[4829]: I0217 16:15:37.493727 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:15:45 crc kubenswrapper[4829]: I0217 16:15:45.960829 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kwz7l"] Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.140455 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.141628 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qts8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(3949cc3c-e03d-42b7-b07f-dbdce94d7283): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.142960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" Feb 17 16:15:49 crc kubenswrapper[4829]: E0217 16:15:49.401907 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" Feb 17 16:15:50 crc kubenswrapper[4829]: W0217 16:15:50.313096 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod741f1fbb_0699_4bb0_b46e_6eaa47595170.slice/crio-c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3 WatchSource:0}: Error finding container c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3: Status 404 returned error can't find the container with id c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3 Feb 17 16:15:50 crc kubenswrapper[4829]: I0217 16:15:50.410473 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"c275c37057f7b5aa113bdf92627f237ed389e8c760b5cb6942980a8f1ca43ce3"} Feb 17 16:15:52 crc kubenswrapper[4829]: I0217 16:15:52.424304 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:52 crc kubenswrapper[4829]: I0217 16:15:52.424682 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.642224 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.642917 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkw5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ftmfx_openstack(66112eb6-8e4a-4469-8cfd-825bf6b7563d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:57 crc kubenswrapper[4829]: E0217 16:15:57.644194 4829 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.030804 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.031013 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9wpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-drgmb_openstack(5c13771b-c220-4ce6-9d1c-3c76af499220): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:58 crc kubenswrapper[4829]: E0217 16:15:58.032291 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.170899 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.171158 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zclf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4zwb8_openstack(8d5f50bb-1dbc-4661-91f3-66c29ea7430e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.172558 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" podUID="8d5f50bb-1dbc-4661-91f3-66c29ea7430e" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.275386 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.275866 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87xml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wffgx_openstack(ffccb67d-5096-4a51-adf3-4bf3739373ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.278757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" podUID="ffccb67d-5096-4a51-adf3-4bf3739373ea" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.492552 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" Feb 17 16:15:59 crc kubenswrapper[4829]: E0217 16:15:58.492897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.500140 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1"} Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.503213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4e3198cb-0642-46be-a9e3-33db29446377","Type":"ContainerStarted","Data":"045eb1b277710dc5c13050ac2f2f64bf44e697379d44f725d130160d951edb94"} Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.503372 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.555626 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.687420054 podStartE2EDuration="32.55560677s" podCreationTimestamp="2026-02-17 16:15:27 +0000 UTC" firstStartedPulling="2026-02-17 16:15:35.350710709 +0000 UTC 
m=+1247.767728687" lastFinishedPulling="2026-02-17 16:15:58.218897425 +0000 UTC m=+1270.635915403" observedRunningTime="2026-02-17 16:15:59.549025886 +0000 UTC m=+1271.966043864" watchObservedRunningTime="2026-02-17 16:15:59.55560677 +0000 UTC m=+1271.972624758" Feb 17 16:15:59 crc kubenswrapper[4829]: I0217 16:15:59.828048 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-86d6749f5-rhzrt"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.111952 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.121865 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.203210 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.211400 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.218776 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.227310 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.266909 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285038 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") pod \"ffccb67d-5096-4a51-adf3-4bf3739373ea\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " Feb 
17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285191 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285276 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") pod \"ffccb67d-5096-4a51-adf3-4bf3739373ea\" (UID: \"ffccb67d-5096-4a51-adf3-4bf3739373ea\") " Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285489 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") pod \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\" (UID: \"8d5f50bb-1dbc-4661-91f3-66c29ea7430e\") " Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: "8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.285974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config" (OuterVolumeSpecName: "config") pod "ffccb67d-5096-4a51-adf3-4bf3739373ea" (UID: "ffccb67d-5096-4a51-adf3-4bf3739373ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config" (OuterVolumeSpecName: "config") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: "8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286630 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286650 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffccb67d-5096-4a51-adf3-4bf3739373ea-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.286660 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.292787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf" (OuterVolumeSpecName: "kube-api-access-4zclf") pod "8d5f50bb-1dbc-4661-91f3-66c29ea7430e" (UID: 
"8d5f50bb-1dbc-4661-91f3-66c29ea7430e"). InnerVolumeSpecName "kube-api-access-4zclf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.312398 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml" (OuterVolumeSpecName: "kube-api-access-87xml") pod "ffccb67d-5096-4a51-adf3-4bf3739373ea" (UID: "ffccb67d-5096-4a51-adf3-4bf3739373ea"). InnerVolumeSpecName "kube-api-access-87xml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.388533 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zclf\" (UniqueName: \"kubernetes.io/projected/8d5f50bb-1dbc-4661-91f3-66c29ea7430e-kube-api-access-4zclf\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.388804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87xml\" (UniqueName: \"kubernetes.io/projected/ffccb67d-5096-4a51-adf3-4bf3739373ea-kube-api-access-87xml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.417493 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.513208 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.518287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.521651 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" event={"ID":"8d5f50bb-1dbc-4661-91f3-66c29ea7430e","Type":"ContainerDied","Data":"e7c4359a6a86de75a2f21197c9258209e81a5ec6d1e0f7b03fc162a1d9d53e77"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.521746 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4zwb8" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.525684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.529172 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d6749f5-rhzrt" event={"ID":"7c076d16-b8e7-4cec-a826-0bfde37276e5","Type":"ContainerStarted","Data":"8fc2b09df95dd5088340580e5716206baf417ca9d4012c72846848e4f2514e5e"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.529220 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-86d6749f5-rhzrt" event={"ID":"7c076d16-b8e7-4cec-a826-0bfde37276e5","Type":"ContainerStarted","Data":"59bfed50ce8346db033c2aba1138b958c53b8ea108cbd1a9924a20ecc090d6ae"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.531143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" event={"ID":"ffccb67d-5096-4a51-adf3-4bf3739373ea","Type":"ContainerDied","Data":"cacd8eed3fb0b0769b53687fb7ee29d23d0b51c36a9b2e50197b211f45b0f9c2"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.531156 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wffgx" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.532984 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8"} Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.631742 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-86d6749f5-rhzrt" podStartSLOduration=30.631723492 podStartE2EDuration="30.631723492s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:00.624591084 +0000 UTC m=+1273.041609062" watchObservedRunningTime="2026-02-17 16:16:00.631723492 +0000 UTC m=+1273.048741470" Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.691638 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.726408 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4zwb8"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.768467 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:16:00 crc kubenswrapper[4829]: I0217 16:16:00.776007 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wffgx"] Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.392934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.393300 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:16:01 crc 
kubenswrapper[4829]: I0217 16:16:01.408551 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.553099 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-86d6749f5-rhzrt" Feb 17 16:16:01 crc kubenswrapper[4829]: I0217 16:16:01.630476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.901471 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f57142_2ddb_4c2f_a68e_ab77ff965e8c.slice/crio-b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499 WatchSource:0}: Error finding container b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499: Status 404 returned error can't find the container with id b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499 Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.904065 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177c70b9_7b56_48f4_abd1_4d7a9c86450a.slice/crio-7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462 WatchSource:0}: Error finding container 7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462: Status 404 returned error can't find the container with id 7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462 Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.906202 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2003bd16_d251_4004_9eca_9e47fb54e514.slice/crio-f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d WatchSource:0}: Error finding container 
f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d: Status 404 returned error can't find the container with id f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.907482 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eeefec2_2e41_4278_8c9d_889dbf5f51ea.slice/crio-d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e WatchSource:0}: Error finding container d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e: Status 404 returned error can't find the container with id d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e Feb 17 16:16:01 crc kubenswrapper[4829]: W0217 16:16:01.915002 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5adca8d_ac72_45d0_aa1c_3c453a78620e.slice/crio-4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a WatchSource:0}: Error finding container 4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a: Status 404 returned error can't find the container with id 4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.294481 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5f50bb-1dbc-4661-91f3-66c29ea7430e" path="/var/lib/kubelet/pods/8d5f50bb-1dbc-4661-91f3-66c29ea7430e/volumes" Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.295099 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffccb67d-5096-4a51-adf3-4bf3739373ea" path="/var/lib/kubelet/pods/ffccb67d-5096-4a51-adf3-4bf3739373ea/volumes" Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.593897 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" 
event={"ID":"54f57142-2ddb-4c2f-a68e-ab77ff965e8c","Type":"ContainerStarted","Data":"b56191ad01f8442a766f33f9f91d3c64be5c43fa209e18cee504832229e2a499"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.608826 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"c7f811cd14f674b453660b6ad7f81e29e6d3b47e489fe39baf0386ce0d424985"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.644164 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.687123 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"d83c4539cf7bb6359fbb034dbed8cca86206e61ad5b4e7cdaba93bb902bdb90e"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.695902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.700469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff" event={"ID":"e5adca8d-ac72-45d0-aa1c-3c453a78620e","Type":"ContainerStarted","Data":"4e31cf14e53b5e90553a3225ded179cd492f7efa7bbd895af8ea4a56b1bb0b9a"} Feb 17 16:16:02 crc kubenswrapper[4829]: I0217 16:16:02.701793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerStarted","Data":"f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d"} 
Feb 17 16:16:03 crc kubenswrapper[4829]: I0217 16:16:03.723862 4829 generic.go:334] "Generic (PLEG): container finished" podID="741f1fbb-0699-4bb0-b46e-6eaa47595170" containerID="a275304b94e13756beec5bc3ea22cea73943689fb08b990e770398e332fc4612" exitCode=0 Feb 17 16:16:03 crc kubenswrapper[4829]: I0217 16:16:03.725052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerDied","Data":"a275304b94e13756beec5bc3ea22cea73943689fb08b990e770398e332fc4612"} Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"fae7e9ae2e690bc53d5f8669f14902debc99d2cb7767aeb20a9cb98be3ae6c5c"} Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740770 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kwz7l" event={"ID":"741f1fbb-0699-4bb0-b46e-6eaa47595170","Type":"ContainerStarted","Data":"196402ba8f3339d4460e67b2683a659c1cfcc9f89c0daad7ca73a902d4481e49"} Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.740803 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:04 crc kubenswrapper[4829]: I0217 16:16:04.773513 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kwz7l" podStartSLOduration=19.777784754 podStartE2EDuration="31.773486455s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:15:50.317041411 +0000 UTC m=+1262.734059419" lastFinishedPulling="2026-02-17 16:16:02.312743122 +0000 UTC m=+1274.729761120" 
observedRunningTime="2026-02-17 16:16:04.764879958 +0000 UTC m=+1277.181897926" watchObservedRunningTime="2026-02-17 16:16:04.773486455 +0000 UTC m=+1277.190504463" Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.770702 4829 generic.go:334] "Generic (PLEG): container finished" podID="903a9538-3e9d-4567-a9c2-0eeaaf450b85" containerID="81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1" exitCode=0 Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.770806 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerDied","Data":"81d86d99dd5ba4a469d8f918d10cd0ff5fb14f2b52d1536b8cab3c69b3637cd1"} Feb 17 16:16:07 crc kubenswrapper[4829]: I0217 16:16:07.850662 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 16:16:08 crc kubenswrapper[4829]: I0217 16:16:08.798924 4829 generic.go:334] "Generic (PLEG): container finished" podID="3949cc3c-e03d-42b7-b07f-dbdce94d7283" containerID="c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838" exitCode=0 Feb 17 16:16:08 crc kubenswrapper[4829]: I0217 16:16:08.799267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerDied","Data":"c496436ea899feb706f42039ca41671e923b0f8470a69f1ddaa37587ecc1e838"} Feb 17 16:16:09 crc kubenswrapper[4829]: I0217 16:16:09.827034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"cb5edfaa181cc07904d86d2889543a22d081fa9b236fe1a4c668e4099c504a68"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.253591 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.349842 4829 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.351376 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.374511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530658 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.530770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: 
\"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633669 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.633721 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.634635 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.634647 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.668536 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"dnsmasq-dns-7cb5889db5-v9m6d\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc 
kubenswrapper[4829]: I0217 16:16:10.677040 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.851025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" event={"ID":"54f57142-2ddb-4c2f-a68e-ab77ff965e8c","Type":"ContainerStarted","Data":"5377f3d7675e771e2cb33f5f0a44ee7e01cb5f9b6da4c0b82963d668146cbd22"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.856643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"fc9d9c4907c2d4bdac819738d0ec4a90fece0da9858b14ae4075e37451c348a4"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.874498 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-vtctx" podStartSLOduration=34.21217454 podStartE2EDuration="40.874479466s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.9045366 +0000 UTC m=+1274.321554578" lastFinishedPulling="2026-02-17 16:16:08.566841516 +0000 UTC m=+1280.983859504" observedRunningTime="2026-02-17 16:16:10.872552544 +0000 UTC m=+1283.289570522" watchObservedRunningTime="2026-02-17 16:16:10.874479466 +0000 UTC m=+1283.291497444" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.886035 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3949cc3c-e03d-42b7-b07f-dbdce94d7283","Type":"ContainerStarted","Data":"72c8454327a4b5d62205b47d208bfac90bd174e589327b1876678366558bee4e"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.893684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff" 
event={"ID":"e5adca8d-ac72-45d0-aa1c-3c453a78620e","Type":"ContainerStarted","Data":"849c857f0f4760afddff607fc710b47bec4447f8edd5991d57bc85a528a0c656"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.894445 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-75gff" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.906779 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"903a9538-3e9d-4567-a9c2-0eeaaf450b85","Type":"ContainerStarted","Data":"3deafe6d5bc9d86f658feddcf39a9e958fd27db1707b6c9b428025af0360eb98"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.921263 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371991.93353 podStartE2EDuration="44.92124423s" podCreationTimestamp="2026-02-17 16:15:26 +0000 UTC" firstStartedPulling="2026-02-17 16:15:28.595597493 +0000 UTC m=+1241.012615471" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:10.917918383 +0000 UTC m=+1283.334936361" watchObservedRunningTime="2026-02-17 16:16:10.92124423 +0000 UTC m=+1283.338262198" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.926914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerStarted","Data":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"} Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.927863 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.952341 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.071951726 podStartE2EDuration="46.952323821s" podCreationTimestamp="2026-02-17 16:15:24 +0000 UTC" 
firstStartedPulling="2026-02-17 16:15:27.327307354 +0000 UTC m=+1239.744325332" lastFinishedPulling="2026-02-17 16:15:58.207679449 +0000 UTC m=+1270.624697427" observedRunningTime="2026-02-17 16:16:10.948965683 +0000 UTC m=+1283.365983661" watchObservedRunningTime="2026-02-17 16:16:10.952323821 +0000 UTC m=+1283.369341799" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.974645 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-75gff" podStartSLOduration=31.599245272 podStartE2EDuration="37.97462873s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.921029616 +0000 UTC m=+1274.338047594" lastFinishedPulling="2026-02-17 16:16:08.296413064 +0000 UTC m=+1280.713431052" observedRunningTime="2026-02-17 16:16:10.968460188 +0000 UTC m=+1283.385478196" watchObservedRunningTime="2026-02-17 16:16:10.97462873 +0000 UTC m=+1283.391646708" Feb 17 16:16:10 crc kubenswrapper[4829]: I0217 16:16:10.981103 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.018993 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=34.625514235 podStartE2EDuration="42.018972102s" podCreationTimestamp="2026-02-17 16:15:29 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.908772342 +0000 UTC m=+1274.325790340" lastFinishedPulling="2026-02-17 16:16:09.302230229 +0000 UTC m=+1281.719248207" observedRunningTime="2026-02-17 16:16:10.984211714 +0000 UTC m=+1283.401229682" watchObservedRunningTime="2026-02-17 16:16:11.018972102 +0000 UTC m=+1283.435990080" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056443 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.056667 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") pod \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\" (UID: \"66112eb6-8e4a-4469-8cfd-825bf6b7563d\") " Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.064843 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config" (OuterVolumeSpecName: "config") pod 
"66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.069871 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g" (OuterVolumeSpecName: "kube-api-access-tkw5g") pod "66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "kube-api-access-tkw5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.072819 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66112eb6-8e4a-4469-8cfd-825bf6b7563d" (UID: "66112eb6-8e4a-4469-8cfd-825bf6b7563d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159798 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159845 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66112eb6-8e4a-4469-8cfd-825bf6b7563d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.159856 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkw5g\" (UniqueName: \"kubernetes.io/projected/66112eb6-8e4a-4469-8cfd-825bf6b7563d-kube-api-access-tkw5g\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.343619 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.518068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.531181 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.534408 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g2mnr" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535480 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.535381 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.538689 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.675440 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.675879 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676070 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676124 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.676248 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778238 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778293 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778334 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778349 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.778359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" 
(UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: E0217 16:16:11.778393 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:12.278375999 +0000 UTC m=+1284.695393977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.780541 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-lock\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.780549 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f22317f-8a58-4b93-b29f-a0e585ac48a9-cache\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.781952 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.781988 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8692e9ccbc74af749ec2fa3c25074da78e03b1b6bccd5192b74189beb87f97ff/globalmount\"" pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.797480 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22317f-8a58-4b93-b29f-a0e585ac48a9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.808288 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9sv\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-kube-api-access-8n9sv\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.830131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56c41997-89e2-4259-aa75-4421f591a101\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c41997-89e2-4259-aa75-4421f591a101\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.934551 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.934584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ftmfx" event={"ID":"66112eb6-8e4a-4469-8cfd-825bf6b7563d","Type":"ContainerDied","Data":"8080c80239a7cc32f4dd13b37dd157e1614f912071a195489dee7b9639b38f73"} Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.936114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerStarted","Data":"947f6f2b812825423fe5cd557b191cf1f236b7165f1fd81b546d6d944de340be"} Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.993780 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:16:11 crc kubenswrapper[4829]: I0217 16:16:11.998183 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ftmfx"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.045685 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.047106 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.048822 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.052204 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.054034 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.068769 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191856 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191901 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 
16:16:12.191915 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.191986 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.192061 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.289842 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66112eb6-8e4a-4469-8cfd-825bf6b7563d" path="/var/lib/kubelet/pods/66112eb6-8e4a-4469-8cfd-825bf6b7563d/volumes" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293631 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293685 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293704 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293782 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293810 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: 
\"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.293895 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294349 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294362 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: E0217 16:16:12.294398 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:13.294385708 +0000 UTC m=+1285.711403686 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.295985 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.302075 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.313973 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod 
\"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.316382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.318936 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"swift-ring-rebalance-84gsz\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:12 crc kubenswrapper[4829]: I0217 16:16:12.368921 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.005916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193"} Feb 17 16:16:13 crc kubenswrapper[4829]: W0217 16:16:13.260125 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b1a5c5_d463_48ba_b0d2_4409299812cb.slice/crio-b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c WatchSource:0}: Error finding container b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c: Status 404 returned error can't find the container with id b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.261077 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-84gsz"] Feb 17 16:16:13 crc kubenswrapper[4829]: I0217 16:16:13.316395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.316803 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.316964 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:13 crc kubenswrapper[4829]: E0217 16:16:13.317006 4829 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:15.316990027 +0000 UTC m=+1287.734008005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.033702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2b04054b-6716-42c5-8e1b-d7eba2bcfe4c","Type":"ContainerStarted","Data":"7e48d01ba99cffe229b614825d1eba453a4d25596643f17e43caf244c19c0ec8"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.036783 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2eeefec2-2e41-4278-8c9d-889dbf5f51ea","Type":"ContainerStarted","Data":"daf6967fc59cb77ff9be84427251c2c6c0cba5c800832d1c51610616fbf7728e"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.040432 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c889225-ec15-48e6-a170-7b805954d7d6" containerID="91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b" exitCode=0 Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.040491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.043951 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerStarted","Data":"b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c"} 
Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.047409 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerID="b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc" exitCode=0 Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.047509 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc"} Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.057638 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.161681284 podStartE2EDuration="42.057608438s" podCreationTimestamp="2026-02-17 16:15:32 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.92080163 +0000 UTC m=+1274.337819608" lastFinishedPulling="2026-02-17 16:16:12.816728784 +0000 UTC m=+1285.233746762" observedRunningTime="2026-02-17 16:16:14.054952168 +0000 UTC m=+1286.471970156" watchObservedRunningTime="2026-02-17 16:16:14.057608438 +0000 UTC m=+1286.474626416" Feb 17 16:16:14 crc kubenswrapper[4829]: I0217 16:16:14.125611 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.246051272 podStartE2EDuration="38.125590704s" podCreationTimestamp="2026-02-17 16:15:36 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.909262875 +0000 UTC m=+1274.326280853" lastFinishedPulling="2026-02-17 16:16:12.788802307 +0000 UTC m=+1285.205820285" observedRunningTime="2026-02-17 16:16:14.123022426 +0000 UTC m=+1286.540040404" watchObservedRunningTime="2026-02-17 16:16:14.125590704 +0000 UTC m=+1286.542608692" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.060423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" 
event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerStarted","Data":"69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09"} Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.061154 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.064716 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerStarted","Data":"6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8"} Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.065290 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.097413 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podStartSLOduration=3.6580011470000002 podStartE2EDuration="52.097390212s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:24.400334536 +0000 UTC m=+1236.817352514" lastFinishedPulling="2026-02-17 16:16:12.839723601 +0000 UTC m=+1285.256741579" observedRunningTime="2026-02-17 16:16:15.083513495 +0000 UTC m=+1287.500531493" watchObservedRunningTime="2026-02-17 16:16:15.097390212 +0000 UTC m=+1287.514408200" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.108087 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" podStartSLOduration=3.754979717 podStartE2EDuration="5.108068604s" podCreationTimestamp="2026-02-17 16:16:10 +0000 UTC" firstStartedPulling="2026-02-17 16:16:11.464017127 +0000 UTC m=+1283.881035105" lastFinishedPulling="2026-02-17 16:16:12.817106014 +0000 UTC m=+1285.234123992" observedRunningTime="2026-02-17 16:16:15.104330585 +0000 UTC m=+1287.521348573" 
watchObservedRunningTime="2026-02-17 16:16:15.108068604 +0000 UTC m=+1287.525086582" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.402330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404142 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404185 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: E0217 16:16:15.404268 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:19.404236416 +0000 UTC m=+1291.821254434 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.937539 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:15 crc kubenswrapper[4829]: I0217 16:16:15.992664 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.081712 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.130645 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.479313 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.494293 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.494918 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.496559 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.499986 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.524032 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.572796 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.613094 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.614817 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.619872 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.633965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634085 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634197 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.634261 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643893 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.643934 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.735874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.735997 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: 
\"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736096 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736132 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736154 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " 
pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736254 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.736292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 
16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.737750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.770367 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"dnsmasq-dns-57d65f699f-crv29\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.822871 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838081 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838233 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod 
\"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838270 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.838327 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.839381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-config\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.840103 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovs-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " 
pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.840156 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-ovn-rundir\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.845850 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.871778 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-combined-ca-bundle\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.883353 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.910322 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.912511 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.922723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.923674 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.939899 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.939983 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940027 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.940132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:16 crc kubenswrapper[4829]: I0217 16:16:16.950422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cn\" (UniqueName: \"kubernetes.io/projected/60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088-kube-api-access-h45cn\") pod \"ovn-controller-metrics-2hx8h\" (UID: \"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088\") " pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041716 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041740 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc 
kubenswrapper[4829]: I0217 16:16:17.041777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.041827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042856 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.042856 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.063077 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.068005 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-tz7z4\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.096346 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" containerID="cri-o://6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" gracePeriod=10 Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.096659 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.097703 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" containerID="cri-o://69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" gracePeriod=10 Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.138021 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.235114 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2hx8h" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.307128 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.308890 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.311872 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312070 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312184 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.312425 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-w5tr8" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.337119 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.344187 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346107 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346616 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346701 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.346788 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448759 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448826 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448849 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.448918 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.449941 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-scripts\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.450290 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add70c30-2098-4686-bd7d-f693219a63b8-config\") pod \"ovn-northd-0\" 
(UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.450910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/add70c30-2098-4686-bd7d-f693219a63b8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.454259 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.455164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.455755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add70c30-2098-4686-bd7d-f693219a63b8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.464123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcskr\" (UniqueName: \"kubernetes.io/projected/add70c30-2098-4686-bd7d-f693219a63b8-kube-api-access-tcskr\") pod \"ovn-northd-0\" (UID: \"add70c30-2098-4686-bd7d-f693219a63b8\") " pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.632730 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.714108 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.714494 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:17 crc kubenswrapper[4829]: I0217 16:16:17.962442 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.086423 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.143311 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193" exitCode=0 Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.144455 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.144585 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="903a9538-3e9d-4567-a9c2-0eeaaf450b85" containerName="galera" probeResult="failure" output=< Feb 17 16:16:18 crc kubenswrapper[4829]: wsrep_local_state_comment (Joined) differs from Synced Feb 17 16:16:18 crc kubenswrapper[4829]: > Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.170212 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c889225-ec15-48e6-a170-7b805954d7d6" containerID="6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" exitCode=0 Feb 17 16:16:18 crc 
kubenswrapper[4829]: I0217 16:16:18.170293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.175871 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerID="69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" exitCode=0 Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.176919 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09"} Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.341060 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.528005 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594503 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.594543 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") pod \"5c13771b-c220-4ce6-9d1c-3c76af499220\" (UID: \"5c13771b-c220-4ce6-9d1c-3c76af499220\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.599712 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw" (OuterVolumeSpecName: "kube-api-access-g9wpw") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "kube-api-access-g9wpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.640829 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.670524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config" (OuterVolumeSpecName: "config") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.673087 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c13771b-c220-4ce6-9d1c-3c76af499220" (UID: "5c13771b-c220-4ce6-9d1c-3c76af499220"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.701896 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702059 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702121 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") pod \"5c889225-ec15-48e6-a170-7b805954d7d6\" (UID: \"5c889225-ec15-48e6-a170-7b805954d7d6\") " Feb 17 16:16:18 crc kubenswrapper[4829]: 
I0217 16:16:18.702796 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702814 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9wpw\" (UniqueName: \"kubernetes.io/projected/5c13771b-c220-4ce6-9d1c-3c76af499220-kube-api-access-g9wpw\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.702827 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c13771b-c220-4ce6-9d1c-3c76af499220-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.705119 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4" (OuterVolumeSpecName: "kube-api-access-qzcx4") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "kube-api-access-qzcx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.747458 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.800889 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config" (OuterVolumeSpecName: "config") pod "5c889225-ec15-48e6-a170-7b805954d7d6" (UID: "5c889225-ec15-48e6-a170-7b805954d7d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804833 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804861 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c889225-ec15-48e6-a170-7b805954d7d6-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.804871 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzcx4\" (UniqueName: \"kubernetes.io/projected/5c889225-ec15-48e6-a170-7b805954d7d6-kube-api-access-qzcx4\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.962341 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2hx8h"] Feb 17 16:16:18 crc kubenswrapper[4829]: W0217 16:16:18.969609 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda954ada0_6e54_469b_a010_3da22abd6a61.slice/crio-db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec WatchSource:0}: Error finding container db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec: Status 404 returned error can't find the container with id db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.976029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:16:18 crc kubenswrapper[4829]: I0217 16:16:18.993343 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.099481 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Feb 17 16:16:19 crc kubenswrapper[4829]: W0217 16:16:19.110922 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadd70c30_2098_4686_bd7d_f693219a63b8.slice/crio-7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54 WatchSource:0}: Error finding container 7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54: Status 404 returned error can't find the container with id 7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54 Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.200342 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerStarted","Data":"5400a25da3cf9813f2738c87bdee6d972d3e819ee60aec5081f361efad50e947"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" event={"ID":"5c889225-ec15-48e6-a170-7b805954d7d6","Type":"ContainerDied","Data":"947f6f2b812825423fe5cd557b191cf1f236b7165f1fd81b546d6d944de340be"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206517 4829 scope.go:117] "RemoveContainer" containerID="6d7ee61357ea6c276b81bbbd10aaabc167dfa38b40827acd3dec25803b5d31b8" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.206684 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-v9m6d" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.219817 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerStarted","Data":"db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.226010 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerStarted","Data":"c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.229335 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2hx8h" event={"ID":"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088","Type":"ContainerStarted","Data":"3a840ae80e771944bfbb62dfe84d04e5d55e6a640be0ff7bec0de168e1adfa6a"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.237584 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"7e22d0b433c4f678e5d7c80e162bf1f0e5daf3ed4cb26d281f74cc98a00e8b54"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.246722 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.251598 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drgmb" event={"ID":"5c13771b-c220-4ce6-9d1c-3c76af499220","Type":"ContainerDied","Data":"ce5063a6f738ea04952eb657c9ffcd22a12ece972f639f6963c8931135871a1a"} Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.259971 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-84gsz" podStartSLOduration=2.248834879 podStartE2EDuration="7.259958974s" podCreationTimestamp="2026-02-17 16:16:12 +0000 UTC" firstStartedPulling="2026-02-17 16:16:13.264511561 +0000 UTC m=+1285.681529539" lastFinishedPulling="2026-02-17 16:16:18.275635656 +0000 UTC m=+1290.692653634" observedRunningTime="2026-02-17 16:16:19.250002351 +0000 UTC m=+1291.667020329" watchObservedRunningTime="2026-02-17 16:16:19.259958974 +0000 UTC m=+1291.676976952" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.281685 4829 scope.go:117] "RemoveContainer" containerID="91dedcacdf3f05572ee33da7f992d47b93f5683121a065cabc05011fa57ae32b" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.292656 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.301425 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-v9m6d"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.317564 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.337062 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drgmb"] Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.338220 4829 scope.go:117] "RemoveContainer" 
containerID="69f60059422d2c59a1ff3786c155b32e48c90830b6cd19c8c256344844c94d09" Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.422132 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422362 4829 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422396 4829 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: E0217 16:16:19.422451 4829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift podName:5f22317f-8a58-4b93-b29f-a0e585ac48a9 nodeName:}" failed. No retries permitted until 2026-02-17 16:16:27.422432865 +0000 UTC m=+1299.839450843 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift") pod "swift-storage-0" (UID: "5f22317f-8a58-4b93-b29f-a0e585ac48a9") : configmap "swift-ring-files" not found Feb 17 16:16:19 crc kubenswrapper[4829]: I0217 16:16:19.454977 4829 scope.go:117] "RemoveContainer" containerID="b7276676806889edf977e0daedb8572cce40b6cfb3544d2aa0b568e364ed37cc" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.265606 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2hx8h" event={"ID":"60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088","Type":"ContainerStarted","Data":"c24ed5c8ce90d9e70304c01fe433c136dc0088914cf7b57c6ccb091e1bd6358c"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.270031 4829 generic.go:334] "Generic (PLEG): container finished" podID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerID="b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75" exitCode=0 Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.270101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.273514 4829 generic.go:334] "Generic (PLEG): container finished" podID="a954ada0-6e54-469b-a010-3da22abd6a61" containerID="d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531" exitCode=0 Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.275410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531"} Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.330905 4829 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovn-controller-metrics-2hx8h" podStartSLOduration=4.330885259 podStartE2EDuration="4.330885259s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:20.28925425 +0000 UTC m=+1292.706272248" watchObservedRunningTime="2026-02-17 16:16:20.330885259 +0000 UTC m=+1292.747903237" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.354235 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" path="/var/lib/kubelet/pods/5c13771b-c220-4ce6-9d1c-3c76af499220/volumes" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.378982 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" path="/var/lib/kubelet/pods/5c889225-ec15-48e6-a170-7b805954d7d6/volumes" Feb 17 16:16:20 crc kubenswrapper[4829]: I0217 16:16:20.401100 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.300468 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerStarted","Data":"4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.300947 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.305138 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"91b4f713a9268ff8e01a4b943596ef88edd8ba7c1d7786c169d974b4e2b70fa8"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.310686 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerStarted","Data":"90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3"} Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.310721 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.323329 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" podStartSLOduration=5.323312062 podStartE2EDuration="5.323312062s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:21.316235914 +0000 UTC m=+1293.733253892" watchObservedRunningTime="2026-02-17 16:16:21.323312062 +0000 UTC m=+1293.740330040" Feb 17 16:16:21 crc kubenswrapper[4829]: I0217 16:16:21.335949 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-crv29" podStartSLOduration=5.3359341350000005 podStartE2EDuration="5.335934135s" podCreationTimestamp="2026-02-17 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:21.33009207 +0000 UTC m=+1293.747110068" watchObservedRunningTime="2026-02-17 16:16:21.335934135 +0000 UTC m=+1293.752952113" Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.321150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"add70c30-2098-4686-bd7d-f693219a63b8","Type":"ContainerStarted","Data":"8bcdeb124d89b2dd03d667081e587ed828cc755f2914df8608da1d4404833615"} Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.348512 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-northd-0" podStartSLOduration=3.564341916 podStartE2EDuration="5.348489189s" podCreationTimestamp="2026-02-17 16:16:17 +0000 UTC" firstStartedPulling="2026-02-17 16:16:19.114779649 +0000 UTC m=+1291.531797627" lastFinishedPulling="2026-02-17 16:16:20.898926922 +0000 UTC m=+1293.315944900" observedRunningTime="2026-02-17 16:16:22.346933618 +0000 UTC m=+1294.763951596" watchObservedRunningTime="2026-02-17 16:16:22.348489189 +0000 UTC m=+1294.765507187" Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.425103 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:22 crc kubenswrapper[4829]: I0217 16:16:22.425156 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:23 crc kubenswrapper[4829]: I0217 16:16:23.328417 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.464661 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465587 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465604 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465622 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465629 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465653 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465661 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: E0217 16:16:26.465675 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465682 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="init" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465937 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c889225-ec15-48e6-a170-7b805954d7d6" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.465969 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c13771b-c220-4ce6-9d1c-3c76af499220" containerName="dnsmasq-dns" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.466822 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.469656 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.483869 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.607174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.607568 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.678724 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-864565556d-824bj" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" containerID="cri-o://76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" gracePeriod=15 Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.709433 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " 
pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.709651 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.710727 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.729795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"root-account-create-update-vkzf7\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.762227 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.797698 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:26 crc kubenswrapper[4829]: I0217 16:16:26.826722 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.338855 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373437 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373488 4829 generic.go:334] "Generic (PLEG): container finished" podID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerID="76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" exitCode=2 Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.373520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerDied","Data":"76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07"} Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.409787 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.410064 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-crv29" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns" containerID="cri-o://90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" gracePeriod=10 Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.433141 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.459749 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f22317f-8a58-4b93-b29f-a0e585ac48a9-etc-swift\") pod \"swift-storage-0\" (UID: \"5f22317f-8a58-4b93-b29f-a0e585ac48a9\") " pod="openstack/swift-storage-0" Feb 17 16:16:27 crc kubenswrapper[4829]: I0217 16:16:27.777772 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.052756 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.054541 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.063620 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.152985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.153061 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" 
Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.156769 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.168798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.168912 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.176205 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255697 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255896 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255921 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.255945 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.257381 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.267104 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.277272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"glance-db-create-l4jl2\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.357408 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.357484 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod 
\"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.358146 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.375044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"glance-8f32-account-create-update-gv4hc\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.378402 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.385210 4829 generic.go:334] "Generic (PLEG): container finished" podID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerID="90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.385276 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.386398 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerStarted","Data":"7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.387788 4829 generic.go:334] "Generic (PLEG): container finished" podID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerID="c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.387819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerDied","Data":"c361e277c5f5671172995fa6ff61b0749f494474617e5f961e94a0f2f1f86248"} Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.488390 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:28 crc kubenswrapper[4829]: I0217 16:16:28.577496 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:16:28 crc kubenswrapper[4829]: W0217 16:16:28.643093 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f22317f_8a58_4b93_b29f_a0e585ac48a9.slice/crio-860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56 WatchSource:0}: Error finding container 860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56: Status 404 returned error can't find the container with id 860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56 Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.053759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.070871 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.072217 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.080066 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.188394 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.195952 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.196846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.219810 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.221941 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.224462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.239334 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.267293 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.269149 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.270407 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.302341 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.303830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.303655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"keystone-db-create-ltmz7\" (UID: 
\"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320448 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320525 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.320682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"keystone-db-create-ltmz7\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.377895 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:16:29 crc kubenswrapper[4829]: E0217 16:16:29.378311 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378624 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" containerName="console" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.378815 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.379475 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.386975 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.398217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"860bc7acee02a347733a5abd872b9df912ba0cd0fe2a5daaf081f0ba2b4f2f56"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-crv29" event={"ID":"ed89f1d3-16f2-4e67-82d5-aed34c03792c","Type":"ContainerDied","Data":"5400a25da3cf9813f2738c87bdee6d972d3e819ee60aec5081f361efad50e947"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406289 4829 scope.go:117] "RemoveContainer" containerID="90c8e544ba495089a2e81002366bad6a88e80a0eae60c6364827f6c03909f7e3" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.406414 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-crv29" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414422 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414662 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.414906 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.415103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.422856 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" 
event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerStarted","Data":"083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.426373 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.427626 4829 generic.go:334] "Generic (PLEG): container finished" podID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerID="60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801" exitCode=0 Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.427701 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerDied","Data":"60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.430958 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-864565556d-824bj_cc453fb9-9d54-4441-bcae-64e34e837dac/console/0.log" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.431114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-864565556d-824bj" event={"ID":"cc453fb9-9d54-4441-bcae-64e34e837dac","Type":"ContainerDied","Data":"1fab21d3b2411b430b712a07fa69d09c6538c393be775a11148627e6607e17a7"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.431251 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-864565556d-824bj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.434105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerStarted","Data":"f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef"} Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.470910 4829 scope.go:117] "RemoveContainer" containerID="b836ce6c959b6af033259f03f8de94d7d175de3eb697329ee8fa11576f484d75" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523328 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523393 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: 
\"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523560 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523616 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523907 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523948 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") pod \"cc453fb9-9d54-4441-bcae-64e34e837dac\" (UID: \"cc453fb9-9d54-4441-bcae-64e34e837dac\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.523984 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524292 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca" (OuterVolumeSpecName: "service-ca") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524306 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb567\" 
(UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.524794 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525123 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525183 4829 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525178 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle" (OuterVolumeSpecName: 
"trusted-ca-bundle") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.525840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526203 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config" (OuterVolumeSpecName: "console-config") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.526878 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.534364 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.538344 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5" (OuterVolumeSpecName: "kube-api-access-sxld5") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "kube-api-access-sxld5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.542735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf" (OuterVolumeSpecName: "kube-api-access-gvfqf") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "kube-api-access-gvfqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.545553 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cc453fb9-9d54-4441-bcae-64e34e837dac" (UID: "cc453fb9-9d54-4441-bcae-64e34e837dac"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.548390 4829 scope.go:117] "RemoveContainer" containerID="76dba13ab717d7cbc76fdd3b8a201ba079c0b1ff4cd8b413c9489df038019d07" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.553875 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"keystone-c7bc-account-create-update-zd552\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.554192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"placement-db-create-vnwrj\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.616357 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.622630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626455 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config" (OuterVolumeSpecName: "config") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") pod \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\" (UID: \"ed89f1d3-16f2-4e67-82d5-aed34c03792c\") " Feb 17 16:16:29 crc kubenswrapper[4829]: W0217 16:16:29.626863 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ed89f1d3-16f2-4e67-82d5-aed34c03792c/volumes/kubernetes.io~configmap/config Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626884 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config" (OuterVolumeSpecName: "config") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.626993 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627249 4829 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627260 4829 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627271 4829 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627280 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 
16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627290 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627298 4829 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc453fb9-9d54-4441-bcae-64e34e837dac-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627307 4829 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc453fb9-9d54-4441-bcae-64e34e837dac-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627316 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxld5\" (UniqueName: \"kubernetes.io/projected/ed89f1d3-16f2-4e67-82d5-aed34c03792c-kube-api-access-sxld5\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627324 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvfqf\" (UniqueName: \"kubernetes.io/projected/cc453fb9-9d54-4441-bcae-64e34e837dac-kube-api-access-gvfqf\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.627907 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.650168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ls4f\" (UniqueName: 
\"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"placement-f99f-account-create-update-7rvdj\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.663161 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed89f1d3-16f2-4e67-82d5-aed34c03792c" (UID: "ed89f1d3-16f2-4e67-82d5-aed34c03792c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.727740 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.728897 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed89f1d3-16f2-4e67-82d5-aed34c03792c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.760440 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.761600 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.768076 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.788064 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-crv29"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.814226 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:16:29 crc kubenswrapper[4829]: I0217 16:16:29.834272 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-864565556d-824bj"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.032409 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.137869 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138049 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138107 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138167 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138223 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138344 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.138477 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") pod \"81b1a5c5-d463-48ba-b0d2-4409299812cb\" (UID: \"81b1a5c5-d463-48ba-b0d2-4409299812cb\") " Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.139285 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.142609 4829 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.144605 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.146188 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r" (OuterVolumeSpecName: "kube-api-access-mq87r") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "kube-api-access-mq87r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.149502 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.179354 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts" (OuterVolumeSpecName: "scripts") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.184111 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.187306 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "81b1a5c5-d463-48ba-b0d2-4409299812cb" (UID: "81b1a5c5-d463-48ba-b0d2-4409299812cb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.190110 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"] Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191831 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191859 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance" Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191875 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="init" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191882 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="init" Feb 17 16:16:30 crc kubenswrapper[4829]: E0217 16:16:30.191908 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.191915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192121 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" containerName="dnsmasq-dns" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192139 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b1a5c5-d463-48ba-b0d2-4409299812cb" containerName="swift-ring-rebalance" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.192948 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.206163 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.237631 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245592 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245652 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod 
\"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245736 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq87r\" (UniqueName: \"kubernetes.io/projected/81b1a5c5-d463-48ba-b0d2-4409299812cb-kube-api-access-mq87r\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245749 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245762 4829 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245772 4829 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81b1a5c5-d463-48ba-b0d2-4409299812cb-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245780 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b1a5c5-d463-48ba-b0d2-4409299812cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.245788 4829 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81b1a5c5-d463-48ba-b0d2-4409299812cb-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.349805 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: 
\"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.350123 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.350959 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.371044 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc453fb9-9d54-4441-bcae-64e34e837dac" path="/var/lib/kubelet/pods/cc453fb9-9d54-4441-bcae-64e34e837dac/volumes" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.371815 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed89f1d3-16f2-4e67-82d5-aed34c03792c" path="/var/lib/kubelet/pods/ed89f1d3-16f2-4e67-82d5-aed34c03792c/volumes" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.378216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"mysqld-exporter-openstack-db-create-tdv6p\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" 
Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.381781 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.384124 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.387808 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.395039 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.486079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.486306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.511852 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-84gsz" event={"ID":"81b1a5c5-d463-48ba-b0d2-4409299812cb","Type":"ContainerDied","Data":"b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c"} Feb 17 16:16:30 crc 
kubenswrapper[4829]: I0217 16:16:30.511893 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b32e705570ddd99f4efce14daaf04a9f1a1723361aec4f45db4664da3e84c52c" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.512002 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-84gsz" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.513857 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.515616 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerStarted","Data":"f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916"} Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.528470 4829 generic.go:334] "Generic (PLEG): container finished" podID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerID="e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c" exitCode=0 Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.528589 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerDied","Data":"e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c"} Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.541253 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerStarted","Data":"97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f"} Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.545234 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f"} Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.570711 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8f32-account-create-update-gv4hc" podStartSLOduration=2.57069301 podStartE2EDuration="2.57069301s" podCreationTimestamp="2026-02-17 16:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:30.563221599 +0000 UTC m=+1302.980239577" watchObservedRunningTime="2026-02-17 16:16:30.57069301 +0000 UTC m=+1302.987710988" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.588638 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.588713 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.589483 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " 
pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.606888 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"mysqld-exporter-bf88-account-create-update-tfddd\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.656821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.667391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.681401 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:16:30 crc kubenswrapper[4829]: I0217 16:16:30.734635 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:30 crc kubenswrapper[4829]: W0217 16:16:30.877144 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406819b6_b859_4d4d_93ee_43180f5981bf.slice/crio-0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96 WatchSource:0}: Error finding container 0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96: Status 404 returned error can't find the container with id 0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96 Feb 17 16:16:30 crc kubenswrapper[4829]: W0217 16:16:30.883528 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea266eaa_6bce_499f_9891_ca9ec670e465.slice/crio-d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1 WatchSource:0}: Error finding container d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1: Status 404 returned error can't find the container with id d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1 Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.333473 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.405765 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"] Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.409295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") pod \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.409509 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") pod \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\" (UID: \"5973a92c-8e88-4f62-b9ce-5c28e57ced0a\") " Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.411113 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5973a92c-8e88-4f62-b9ce-5c28e57ced0a" (UID: "5973a92c-8e88-4f62-b9ce-5c28e57ced0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.418169 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx" (OuterVolumeSpecName: "kube-api-access-6k6jx") pod "5973a92c-8e88-4f62-b9ce-5c28e57ced0a" (UID: "5973a92c-8e88-4f62-b9ce-5c28e57ced0a"). InnerVolumeSpecName "kube-api-access-6k6jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.513402 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.513436 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k6jx\" (UniqueName: \"kubernetes.io/projected/5973a92c-8e88-4f62-b9ce-5c28e57ced0a-kube-api-access-6k6jx\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.524759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"] Feb 17 16:16:31 crc kubenswrapper[4829]: W0217 16:16:31.533638 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03006c3_35b5_45e5_9b9f_578a8eabbf22.slice/crio-da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70 WatchSource:0}: Error finding container da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70: Status 404 returned error can't find the container with id da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70 Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.558941 4829 generic.go:334] "Generic (PLEG): container finished" podID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerID="97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f" exitCode=0 Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.559010 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerDied","Data":"97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.561449 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerStarted","Data":"02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.563127 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerStarted","Data":"da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.564716 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerStarted","Data":"78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.568853 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerStarted","Data":"459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.568921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerStarted","Data":"d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.576018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"f8384053ab6137c27b9271267c4cccc647d9e2209f6bb04cce6b1f6a5db93eaa"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.579139 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerStarted","Data":"2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.579178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerStarted","Data":"0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.581870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vkzf7" event={"ID":"5973a92c-8e88-4f62-b9ce-5c28e57ced0a","Type":"ContainerDied","Data":"7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.581896 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae9cdc8dfc1c0b910afda072040e121765fb2f4f125509b4de35b288d6471cf" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.582048 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vkzf7" Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.584243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerStarted","Data":"17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.584274 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerStarted","Data":"7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1"} Feb 17 16:16:31 crc kubenswrapper[4829]: I0217 16:16:31.602721 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c7bc-account-create-update-zd552" podStartSLOduration=2.602696596 podStartE2EDuration="2.602696596s" podCreationTimestamp="2026-02-17 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:31.593400776 +0000 UTC m=+1304.010418754" watchObservedRunningTime="2026-02-17 16:16:31.602696596 +0000 UTC m=+1304.019714564" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.293524 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.335288 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") pod \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.335516 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") pod \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\" (UID: \"aaa06d20-74dd-41b6-822b-485fdf6cc6d5\") " Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.340275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd" (OuterVolumeSpecName: "kube-api-access-ft5pd") pod "aaa06d20-74dd-41b6-822b-485fdf6cc6d5" (UID: "aaa06d20-74dd-41b6-822b-485fdf6cc6d5"). InnerVolumeSpecName "kube-api-access-ft5pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.340468 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aaa06d20-74dd-41b6-822b-485fdf6cc6d5" (UID: "aaa06d20-74dd-41b6-822b-485fdf6cc6d5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.437548 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.437605 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5pd\" (UniqueName: \"kubernetes.io/projected/aaa06d20-74dd-41b6-822b-485fdf6cc6d5-kube-api-access-ft5pd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.600658 4829 generic.go:334] "Generic (PLEG): container finished" podID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerID="17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.600976 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerDied","Data":"17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.604228 4829 generic.go:334] "Generic (PLEG): container finished" podID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerID="78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.604330 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerDied","Data":"78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.608190 4829 generic.go:334] "Generic (PLEG): container finished" podID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" exitCode=0 Feb 
17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.608217 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612354 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l4jl2" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612448 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l4jl2" event={"ID":"aaa06d20-74dd-41b6-822b-485fdf6cc6d5","Type":"ContainerDied","Data":"f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.612528 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93fbecde54df28ddb2c82fb4e413c8a581f57e134ae95901320f13d6eb930ef" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.616718 4829 generic.go:334] "Generic (PLEG): container finished" podID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerID="6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.616822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.626244 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.626343 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.631171 4829 generic.go:334] "Generic (PLEG): container finished" podID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerID="b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.631364 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.635469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerStarted","Data":"718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1"} Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.807891 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f99f-account-create-update-7rvdj" podStartSLOduration=3.807874159 podStartE2EDuration="3.807874159s" podCreationTimestamp="2026-02-17 16:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:32.803353117 +0000 UTC m=+1305.220371105" watchObservedRunningTime="2026-02-17 16:16:32.807874159 +0000 UTC m=+1305.224892137" Feb 17 16:16:32 crc kubenswrapper[4829]: I0217 16:16:32.860886 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" podStartSLOduration=2.860857069 podStartE2EDuration="2.860857069s" podCreationTimestamp="2026-02-17 16:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:32.859954955 +0000 UTC m=+1305.276972933" watchObservedRunningTime="2026-02-17 16:16:32.860857069 +0000 UTC m=+1305.277875047" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.345877 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.465593 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") pod \"91c18e73-013c-4a4d-a4cc-922f43fccf45\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.465690 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") pod \"91c18e73-013c-4a4d-a4cc-922f43fccf45\" (UID: \"91c18e73-013c-4a4d-a4cc-922f43fccf45\") " Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.466753 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91c18e73-013c-4a4d-a4cc-922f43fccf45" (UID: "91c18e73-013c-4a4d-a4cc-922f43fccf45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.469424 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b" (OuterVolumeSpecName: "kube-api-access-hf85b") pod "91c18e73-013c-4a4d-a4cc-922f43fccf45" (UID: "91c18e73-013c-4a4d-a4cc-922f43fccf45"). InnerVolumeSpecName "kube-api-access-hf85b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.567961 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf85b\" (UniqueName: \"kubernetes.io/projected/91c18e73-013c-4a4d-a4cc-922f43fccf45-kube-api-access-hf85b\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.568198 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c18e73-013c-4a4d-a4cc-922f43fccf45-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654198 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8f32-account-create-update-gv4hc" Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8f32-account-create-update-gv4hc" event={"ID":"91c18e73-013c-4a4d-a4cc-922f43fccf45","Type":"ContainerDied","Data":"083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6"} Feb 17 16:16:33 crc kubenswrapper[4829]: I0217 16:16:33.654749 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083a84fd9f73860d681bbc5f140647a413d4ea0a9ec7cc8bd63d0926e4172bb6" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.305970 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.311847 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386635 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") pod \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386856 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") pod \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\" (UID: \"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386909 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") pod \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.386939 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") pod \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\" (UID: \"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d\") " Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387065 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" (UID: "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387383 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.387466 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" (UID: "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.391537 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567" (OuterVolumeSpecName: "kube-api-access-sb567") pod "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" (UID: "9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef"). InnerVolumeSpecName "kube-api-access-sb567". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.392309 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh" (OuterVolumeSpecName: "kube-api-access-cc5hh") pod "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" (UID: "3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d"). InnerVolumeSpecName "kube-api-access-cc5hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb567\" (UniqueName: \"kubernetes.io/projected/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef-kube-api-access-sb567\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489503 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc5hh\" (UniqueName: \"kubernetes.io/projected/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-kube-api-access-cc5hh\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.489515 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666337 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ltmz7" event={"ID":"3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d","Type":"ContainerDied","Data":"f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666387 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f52ad3d93d8806423af5926ec3fa28488e1905b42937650fe2fc8623d5d01916" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.666447 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ltmz7" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.673611 4829 generic.go:334] "Generic (PLEG): container finished" podID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerID="459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945" exitCode=0 Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.673770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerDied","Data":"459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679682 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vnwrj" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679715 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vnwrj" event={"ID":"9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef","Type":"ContainerDied","Data":"7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1"} Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.679755 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f08b408f3cb590f25ec598092b861571783233e80da160cee97af34465e38d1" Feb 17 16:16:34 crc kubenswrapper[4829]: I0217 16:16:34.682189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerStarted","Data":"50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26"} Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.022004 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.032940 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-vkzf7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.107514 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108306 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.108403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108481 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.108560 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.108747 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.109030 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.110014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110131 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: E0217 16:16:35.110250 4829 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110342 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.110975 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111379 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111515 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" containerName="mariadb-database-create" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111648 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.111731 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" containerName="mariadb-account-create-update" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.113308 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.116532 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.121863 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.204942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.205034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.307097 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.307312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: 
\"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.308604 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.325405 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"root-account-create-update-mxqd7\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.434840 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.692524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"e85a27ef9b0c20e651ae3c51098f9a9be196db23f0c032d53e7793658c1483ab"} Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.697688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7"} Feb 17 16:16:35 crc kubenswrapper[4829]: W0217 16:16:35.932921 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd81de6_80f5_4245_9f19_c86c9ffc125d.slice/crio-4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f WatchSource:0}: Error finding container 4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f: Status 404 returned error can't find the container with id 4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f Feb 17 16:16:35 crc kubenswrapper[4829]: I0217 16:16:35.943086 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.128869 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.229745 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") pod \"ea266eaa-6bce-499f-9891-ca9ec670e465\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.229827 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") pod \"ea266eaa-6bce-499f-9891-ca9ec670e465\" (UID: \"ea266eaa-6bce-499f-9891-ca9ec670e465\") " Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.230368 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea266eaa-6bce-499f-9891-ca9ec670e465" (UID: "ea266eaa-6bce-499f-9891-ca9ec670e465"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.230843 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea266eaa-6bce-499f-9891-ca9ec670e465-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.233929 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f" (OuterVolumeSpecName: "kube-api-access-2ls4f") pod "ea266eaa-6bce-499f-9891-ca9ec670e465" (UID: "ea266eaa-6bce-499f-9891-ca9ec670e465"). InnerVolumeSpecName "kube-api-access-2ls4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.291119 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5973a92c-8e88-4f62-b9ce-5c28e57ced0a" path="/var/lib/kubelet/pods/5973a92c-8e88-4f62-b9ce-5c28e57ced0a/volumes" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.332712 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ls4f\" (UniqueName: \"kubernetes.io/projected/ea266eaa-6bce-499f-9891-ca9ec670e465-kube-api-access-2ls4f\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720362 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f99f-account-create-update-7rvdj" event={"ID":"ea266eaa-6bce-499f-9891-ca9ec670e465","Type":"ContainerDied","Data":"d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720408 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f4677c3b37b23e2ca1b739b05d1e6923d398b4ed8676589f318124cece60b1" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.720490 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f99f-account-create-update-7rvdj" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.724836 4829 generic.go:334] "Generic (PLEG): container finished" podID="406819b6-b859-4d4d-93ee-43180f5981bf" containerID="2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0" exitCode=0 Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.724911 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerDied","Data":"2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.729248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"49c16e35c06436eeb8c73f4b8b2a68bc23fca33e16bdc7d064897a3e30e301c9"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.739114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerStarted","Data":"8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.739170 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerStarted","Data":"4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.748137 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerStarted","Data":"1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.748987 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.764393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerStarted","Data":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.765258 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.777989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerStarted","Data":"6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.778459 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.799524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerStarted","Data":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.800209 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.800981 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mxqd7" podStartSLOduration=1.80097006 podStartE2EDuration="1.80097006s" podCreationTimestamp="2026-02-17 16:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:36.777566178 +0000 UTC m=+1309.194584156" 
watchObservedRunningTime="2026-02-17 16:16:36.80097006 +0000 UTC m=+1309.217988038" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.815251 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.498253677 podStartE2EDuration="1m13.815232375s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:25.903657622 +0000 UTC m=+1238.320675600" lastFinishedPulling="2026-02-17 16:15:58.22063632 +0000 UTC m=+1270.637654298" observedRunningTime="2026-02-17 16:16:36.800227119 +0000 UTC m=+1309.217245097" watchObservedRunningTime="2026-02-17 16:16:36.815232375 +0000 UTC m=+1309.232250353" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.839111 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.907958911 podStartE2EDuration="1m13.839096419s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:26.23827607 +0000 UTC m=+1238.655294048" lastFinishedPulling="2026-02-17 16:15:58.169413578 +0000 UTC m=+1270.586431556" observedRunningTime="2026-02-17 16:16:36.837341262 +0000 UTC m=+1309.254359240" watchObservedRunningTime="2026-02-17 16:16:36.839096419 +0000 UTC m=+1309.256114397" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.863221 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" podStartSLOduration=6.86320714 podStartE2EDuration="6.86320714s" podCreationTimestamp="2026-02-17 16:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:36.855008769 +0000 UTC m=+1309.272026747" watchObservedRunningTime="2026-02-17 16:16:36.86320714 +0000 UTC m=+1309.280225118" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.885859 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=41.764842889 podStartE2EDuration="1m13.885844371s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:26.099339851 +0000 UTC m=+1238.516357829" lastFinishedPulling="2026-02-17 16:15:58.220341343 +0000 UTC m=+1270.637359311" observedRunningTime="2026-02-17 16:16:36.885296027 +0000 UTC m=+1309.302314005" watchObservedRunningTime="2026-02-17 16:16:36.885844371 +0000 UTC m=+1309.302862349" Feb 17 16:16:36 crc kubenswrapper[4829]: I0217 16:16:36.932817 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=41.494301532 podStartE2EDuration="1m13.93279998s" podCreationTimestamp="2026-02-17 16:15:23 +0000 UTC" firstStartedPulling="2026-02-17 16:15:25.889313214 +0000 UTC m=+1238.306331192" lastFinishedPulling="2026-02-17 16:15:58.327811662 +0000 UTC m=+1270.744829640" observedRunningTime="2026-02-17 16:16:36.916083178 +0000 UTC m=+1309.333101166" watchObservedRunningTime="2026-02-17 16:16:36.93279998 +0000 UTC m=+1309.349817958" Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.701223 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.809377 4829 generic.go:334] "Generic (PLEG): container finished" podID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerID="718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1" exitCode=0 Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.809464 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerDied","Data":"718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1"} Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.813195 4829 
generic.go:334] "Generic (PLEG): container finished" podID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerID="50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26" exitCode=0 Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.813249 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerDied","Data":"50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26"} Feb 17 16:16:37 crc kubenswrapper[4829]: I0217 16:16:37.815989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"b29fbef8b292c4902f6f086484aeb803f7a4c29f2f87c33b7326d81889554552"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.350912 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.401616 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:38 crc kubenswrapper[4829]: E0217 16:16:38.402002 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402018 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: E0217 16:16:38.402057 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402064 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" 
containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402232 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402254 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" containerName="mariadb-account-create-update" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.402906 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.406235 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.406373 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xbdvq" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.439803 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.492331 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") pod \"406819b6-b859-4d4d-93ee-43180f5981bf\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.492518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") pod \"406819b6-b859-4d4d-93ee-43180f5981bf\" (UID: \"406819b6-b859-4d4d-93ee-43180f5981bf\") " Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493100 4829 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "406819b6-b859-4d4d-93ee-43180f5981bf" (UID: "406819b6-b859-4d4d-93ee-43180f5981bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493397 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.493656 4829 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/406819b6-b859-4d4d-93ee-43180f5981bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.499223 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6" (OuterVolumeSpecName: "kube-api-access-lvvc6") pod "406819b6-b859-4d4d-93ee-43180f5981bf" (UID: "406819b6-b859-4d4d-93ee-43180f5981bf"). InnerVolumeSpecName "kube-api-access-lvvc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.568129 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.580096 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kwz7l" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595200 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595220 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595320 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.595386 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvvc6\" (UniqueName: \"kubernetes.io/projected/406819b6-b859-4d4d-93ee-43180f5981bf-kube-api-access-lvvc6\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.599799 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.602667 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.604481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc 
kubenswrapper[4829]: I0217 16:16:38.619772 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"glance-db-sync-9z4lf\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.745774 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.843210 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7bc-account-create-update-zd552" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.843219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7bc-account-create-update-zd552" event={"ID":"406819b6-b859-4d4d-93ee-43180f5981bf","Type":"ContainerDied","Data":"0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.844078 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a48c506b6de082e59def3878578dad02e29396995675f0cde7e8f0d61837f96" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.849863 4829 generic.go:334] "Generic (PLEG): container finished" podID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerID="8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3" exitCode=0 Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.850005 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerDied","Data":"8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3"} Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.858468 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.859851 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.866085 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:16:38 crc kubenswrapper[4829]: I0217 16:16:38.876973 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046669 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046877 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.046932 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152081 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152126 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152198 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152251 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.152794 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.154996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.161396 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.176015 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ovn-controller-75gff-config-xlnvr\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.247545 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.681915 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:16:39 crc kubenswrapper[4829]: W0217 16:16:39.842446 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode14bea24_3170_4bdb_8811_9a94d94ae4b7.slice/crio-a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e WatchSource:0}: Error finding container a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e: Status 404 returned error can't find the container with id a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.874462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" event={"ID":"e03006c3-35b5-45e5-9b9f-578a8eabbf22","Type":"ContainerDied","Data":"da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70"} Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.874494 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8b34ba373e123bcd23a942af760fd256d115c59a63c9e56da03b2179403c70" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.875595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerStarted","Data":"a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e"} Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.876928 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" event={"ID":"e50b4954-d1c6-451e-b8f4-3ba817c89c6b","Type":"ContainerDied","Data":"02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be"} Feb 17 16:16:39 crc kubenswrapper[4829]: 
I0217 16:16:39.876953 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f544e4bf4d2d30ada866fe3ea0f7c521ec3ce982764ab285b7a2880bbf91be" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.948234 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.960839 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977649 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") pod \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\" (UID: \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977702 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") pod \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") pod \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\" (UID: \"e03006c3-35b5-45e5-9b9f-578a8eabbf22\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.977864 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") pod \"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\" (UID: 
\"e50b4954-d1c6-451e-b8f4-3ba817c89c6b\") " Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.980296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e50b4954-d1c6-451e-b8f4-3ba817c89c6b" (UID: "e50b4954-d1c6-451e-b8f4-3ba817c89c6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:39 crc kubenswrapper[4829]: I0217 16:16:39.999437 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e03006c3-35b5-45e5-9b9f-578a8eabbf22" (UID: "e03006c3-35b5-45e5-9b9f-578a8eabbf22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.017180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb" (OuterVolumeSpecName: "kube-api-access-fvbwb") pod "e50b4954-d1c6-451e-b8f4-3ba817c89c6b" (UID: "e50b4954-d1c6-451e-b8f4-3ba817c89c6b"). InnerVolumeSpecName "kube-api-access-fvbwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.019608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x" (OuterVolumeSpecName: "kube-api-access-2ls2x") pod "e03006c3-35b5-45e5-9b9f-578a8eabbf22" (UID: "e03006c3-35b5-45e5-9b9f-578a8eabbf22"). InnerVolumeSpecName "kube-api-access-2ls2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081361 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbwb\" (UniqueName: \"kubernetes.io/projected/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-kube-api-access-fvbwb\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081410 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ls2x\" (UniqueName: \"kubernetes.io/projected/e03006c3-35b5-45e5-9b9f-578a8eabbf22-kube-api-access-2ls2x\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081420 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03006c3-35b5-45e5-9b9f-578a8eabbf22-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.081428 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e50b4954-d1c6-451e-b8f4-3ba817c89c6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.887164 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bf88-account-create-update-tfddd" Feb 17 16:16:40 crc kubenswrapper[4829]: I0217 16:16:40.887178 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-tdv6p" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.482519 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.510216 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") pod \"abd81de6-80f5-4245-9f19-c86c9ffc125d\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.510295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") pod \"abd81de6-80f5-4245-9f19-c86c9ffc125d\" (UID: \"abd81de6-80f5-4245-9f19-c86c9ffc125d\") " Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.511432 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abd81de6-80f5-4245-9f19-c86c9ffc125d" (UID: "abd81de6-80f5-4245-9f19-c86c9ffc125d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.517486 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs" (OuterVolumeSpecName: "kube-api-access-s4gbs") pod "abd81de6-80f5-4245-9f19-c86c9ffc125d" (UID: "abd81de6-80f5-4245-9f19-c86c9ffc125d"). InnerVolumeSpecName "kube-api-access-s4gbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.613876 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abd81de6-80f5-4245-9f19-c86c9ffc125d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.614270 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4gbs\" (UniqueName: \"kubernetes.io/projected/abd81de6-80f5-4245-9f19-c86c9ffc125d-kube-api-access-s4gbs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.900463 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerStarted","Data":"0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.908069 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"7b3f944131c6f1201ac98c6a57b8a51ee85f8b9ddc0aec87e7452b12c2dc3229"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910247 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mxqd7" event={"ID":"abd81de6-80f5-4245-9f19-c86c9ffc125d","Type":"ContainerDied","Data":"4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f"} Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910286 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1a2717806c74892d1ca254cca4f103380f8bf5b132b395d5fe11c1c7003b7f" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.910344 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mxqd7" Feb 17 16:16:41 crc kubenswrapper[4829]: I0217 16:16:41.938364 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=32.474811618 podStartE2EDuration="1m11.938341829s" podCreationTimestamp="2026-02-17 16:15:30 +0000 UTC" firstStartedPulling="2026-02-17 16:16:01.907759706 +0000 UTC m=+1274.324777694" lastFinishedPulling="2026-02-17 16:16:41.371289927 +0000 UTC m=+1313.788307905" observedRunningTime="2026-02-17 16:16:41.929556932 +0000 UTC m=+1314.346574910" watchObservedRunningTime="2026-02-17 16:16:41.938341829 +0000 UTC m=+1314.355359807" Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.032245 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:42 crc kubenswrapper[4829]: W0217 16:16:42.035746 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9903a8_9361_4b89_a039_72f3e6023014.slice/crio-df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c WatchSource:0}: Error finding container df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c: Status 404 returned error can't find the container with id df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.921627 4829 generic.go:334] "Generic (PLEG): container finished" podID="ec9903a8-9361-4b89-a039-72f3e6023014" containerID="49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053" exitCode=0 Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.921857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerDied","Data":"49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053"} Feb 17 
16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.923192 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerStarted","Data":"df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933051 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"3c040a41cebf8d70b8baefb52efbd401563a8a49eb0f8b02d93d0f8560f67fba"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933098 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"f69133749a3667523012a8bb406ae6fee9f85ea5a4fe699e60e9cd1cf1035caf"} Feb 17 16:16:42 crc kubenswrapper[4829]: I0217 16:16:42.933112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"d63cf2af0ef6375cfeb0fd533f0aa7bbe23da758b65075cb5582ea1d7fc82df0"} Feb 17 16:16:43 crc kubenswrapper[4829]: I0217 16:16:43.501516 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-75gff" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.415729 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482652 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482770 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482859 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.482984 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") pod \"ec9903a8-9361-4b89-a039-72f3e6023014\" (UID: \"ec9903a8-9361-4b89-a039-72f3e6023014\") " Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483626 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts" (OuterVolumeSpecName: "scripts") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.483940 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484038 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484129 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.484121 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run" (OuterVolumeSpecName: "var-run") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.489919 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n" (OuterVolumeSpecName: "kube-api-access-swn4n") pod "ec9903a8-9361-4b89-a039-72f3e6023014" (UID: "ec9903a8-9361-4b89-a039-72f3e6023014"). InnerVolumeSpecName "kube-api-access-swn4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585872 4829 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585897 4829 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585909 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec9903a8-9361-4b89-a039-72f3e6023014-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585919 4829 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 
16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585927 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swn4n\" (UniqueName: \"kubernetes.io/projected/ec9903a8-9361-4b89-a039-72f3e6023014-kube-api-access-swn4n\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.585936 4829 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec9903a8-9361-4b89-a039-72f3e6023014-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-75gff-config-xlnvr" event={"ID":"ec9903a8-9361-4b89-a039-72f3e6023014","Type":"ContainerDied","Data":"df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c"} Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966805 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df261077f361ca8bf6ee3cb32a9210058f363e97492a4f7b82f6585d0079a31c" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.966887 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-75gff-config-xlnvr" Feb 17 16:16:44 crc kubenswrapper[4829]: I0217 16:16:44.997075 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"1c69bdea01d0eb771aaed33e5c219b1787a9254995581723b8a3193237d120ee"} Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.204771 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.228507 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.244424 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.321620 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.523522 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.531932 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-75gff-config-xlnvr"] Feb 17 16:16:45 crc 
kubenswrapper[4829]: I0217 16:16:45.677769 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678259 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678281 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678305 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678316 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678328 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678336 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: E0217 16:16:45.678364 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678373 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678631 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" 
containerName="mariadb-database-create" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678651 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" containerName="ovn-config" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678666 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.678696 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" containerName="mariadb-account-create-update" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.686379 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.689451 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.704665 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.704769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc 
kubenswrapper[4829]: I0217 16:16:45.775895 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.777064 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.780078 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.797843 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806930 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.806990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.807072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.807473 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.828690 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"mysqld-exporter-openstack-cell1-db-create-qg7tn\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.908667 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.908779 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j49wg\" (UniqueName: 
\"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.909683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:45 crc kubenswrapper[4829]: I0217 16:16:45.929462 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"mysqld-exporter-5498-account-create-update-qsrnr\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.012883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"05cb9a0c6481f759ae84af3cfad13fe4afda3863a81b78de62eaa011eac0f643"} Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.012921 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"08554e9a8b8a36a92329c01a8fe5df0b356de6aee76a13d35000a6f089ea7dc8"} Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.021763 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.110017 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.320766 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9903a8-9361-4b89-a039-72f3e6023014" path="/var/lib/kubelet/pods/ec9903a8-9361-4b89-a039-72f3e6023014/volumes" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.506895 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.516834 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mxqd7"] Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.530619 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.530666 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.536713 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.616402 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:16:46 crc kubenswrapper[4829]: W0217 16:16:46.623261 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c492d16_f301_449b_a877_a15a17739865.slice/crio-ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a WatchSource:0}: Error finding container 
ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a: Status 404 returned error can't find the container with id ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a Feb 17 16:16:46 crc kubenswrapper[4829]: I0217 16:16:46.854901 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.035461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"1a5168715961ab0df7d232692dfee428dafc361cfa022f838b5a790e6e42552d"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.035504 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"b02533177233a5d4b6fb93d36bca1cce5b981822103fe41f3cd562b88816d43e"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.036925 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerStarted","Data":"d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.039155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerStarted","Data":"ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a"} Feb 17 16:16:47 crc kubenswrapper[4829]: I0217 16:16:47.040718 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.053637 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"50bc7039faccad056afde70287bb6da898fd1aa0f5e0a321af578d8b7019bda5"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.054159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f22317f-8a58-4b93-b29f-a0e585ac48a9","Type":"ContainerStarted","Data":"cea8a64498ea6d2002aa5f742146b402b9a523b186eca403bb746cab1b2d5f15"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.055920 4829 generic.go:334] "Generic (PLEG): container finished" podID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerID="17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376" exitCode=0 Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.055980 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerDied","Data":"17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.058070 4829 generic.go:334] "Generic (PLEG): container finished" podID="5c492d16-f301-449b-a877-a15a17739865" containerID="6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d" exitCode=0 Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.058840 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerDied","Data":"6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d"} Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.130152 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.939803041 podStartE2EDuration="38.130133299s" podCreationTimestamp="2026-02-17 16:16:10 +0000 UTC" firstStartedPulling="2026-02-17 16:16:28.888348555 +0000 UTC m=+1301.305366543" 
lastFinishedPulling="2026-02-17 16:16:44.078678823 +0000 UTC m=+1316.495696801" observedRunningTime="2026-02-17 16:16:48.121566588 +0000 UTC m=+1320.538584576" watchObservedRunningTime="2026-02-17 16:16:48.130133299 +0000 UTC m=+1320.547151277" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.305131 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd81de6-80f5-4245-9f19-c86c9ffc125d" path="/var/lib/kubelet/pods/abd81de6-80f5-4245-9f19-c86c9ffc125d/volumes" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.421979 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.423517 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.426797 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.437409 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.572803 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573186 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 
16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573409 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573505 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.573720 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675656 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 
17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675781 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675818 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.675902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 
16:16:48.676472 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.676643 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.676839 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.677356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.677461 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.706757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drp4\" 
(UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"dnsmasq-dns-6d5b6d6b67-lpwtt\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:48 crc kubenswrapper[4829]: I0217 16:16:48.779530 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.618129 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619694 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" containerID="cri-o://4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" gracePeriod=600 Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619745 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" containerID="cri-o://0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a" gracePeriod=600 Feb 17 16:16:49 crc kubenswrapper[4829]: I0217 16:16:49.619778 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" containerID="cri-o://acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" gracePeriod=600 Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085317 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a" exitCode=0 Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085676 4829 
generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" exitCode=0 Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085688 4829 generic.go:334] "Generic (PLEG): container finished" podID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerID="4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" exitCode=0 Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085383 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a"} Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7"} Feb 17 16:16:50 crc kubenswrapper[4829]: I0217 16:16:50.085734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f"} Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.527723 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btrfb"] Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.529940 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.530245 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.532612 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.550354 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btrfb"] Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.649467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.649521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.751349 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " 
pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.751426 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.753253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.773696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"root-account-create-update-btrfb\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:51 crc kubenswrapper[4829]: I0217 16:16:51.858777 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.424990 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.425374 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.425431 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.426397 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:16:52 crc kubenswrapper[4829]: I0217 16:16:52.426487 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" gracePeriod=600 Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126769 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" exitCode=0 Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126814 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158"} Feb 17 16:16:53 crc kubenswrapper[4829]: I0217 16:16:53.126869 4829 scope.go:117] "RemoveContainer" containerID="9da0c058c3bb164952f2bac9b04d4f517520fe5227b381c4d352e6c16eaf99c8" Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.203045 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.227887 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.243841 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 17 16:16:55 crc kubenswrapper[4829]: I0217 16:16:55.322424 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:16:56 crc kubenswrapper[4829]: I0217 16:16:56.531025 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" 
probeResult="failure" output="Get \"http://10.217.0.137:9090/-/ready\": dial tcp 10.217.0.137:9090: connect: connection refused" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.202462 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" event={"ID":"5c492d16-f301-449b-a877-a15a17739865","Type":"ContainerDied","Data":"ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a"} Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.202779 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef981bbf47d19bd3efada398dbe652ab9869b3a693302a54e884a020088bbd0a" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.236423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" event={"ID":"f2e81e7f-9610-493c-bdb8-6a7de58b94bf","Type":"ContainerDied","Data":"d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53"} Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.236460 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ae82f25aae93b3b2f04e4d55e0c061663830d1dcffecf488a79fe2d2001d53" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.254092 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.272769 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363011 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") pod \"5c492d16-f301-449b-a877-a15a17739865\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363060 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") pod \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") pod \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\" (UID: \"f2e81e7f-9610-493c-bdb8-6a7de58b94bf\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363325 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") pod \"5c492d16-f301-449b-a877-a15a17739865\" (UID: \"5c492d16-f301-449b-a877-a15a17739865\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.363798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c492d16-f301-449b-a877-a15a17739865" (UID: "5c492d16-f301-449b-a877-a15a17739865"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.364106 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c492d16-f301-449b-a877-a15a17739865-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.364534 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2e81e7f-9610-493c-bdb8-6a7de58b94bf" (UID: "f2e81e7f-9610-493c-bdb8-6a7de58b94bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.372391 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx" (OuterVolumeSpecName: "kube-api-access-r4lwx") pod "5c492d16-f301-449b-a877-a15a17739865" (UID: "5c492d16-f301-449b-a877-a15a17739865"). InnerVolumeSpecName "kube-api-access-r4lwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.373826 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg" (OuterVolumeSpecName: "kube-api-access-j49wg") pod "f2e81e7f-9610-493c-bdb8-6a7de58b94bf" (UID: "f2e81e7f-9610-493c-bdb8-6a7de58b94bf"). InnerVolumeSpecName "kube-api-access-j49wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466068 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lwx\" (UniqueName: \"kubernetes.io/projected/5c492d16-f301-449b-a877-a15a17739865-kube-api-access-r4lwx\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466099 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.466109 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j49wg\" (UniqueName: \"kubernetes.io/projected/f2e81e7f-9610-493c-bdb8-6a7de58b94bf-kube-api-access-j49wg\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.500333 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.673626 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.673993 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674032 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674073 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674099 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674136 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674178 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.674407 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\" (UID: \"177c70b9-7b56-48f4-abd1-4d7a9c86450a\") " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.675934 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.677790 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.678590 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.679291 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979" (OuterVolumeSpecName: "kube-api-access-bd979") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "kube-api-access-bd979". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.679454 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out" (OuterVolumeSpecName: "config-out") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.685032 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config" (OuterVolumeSpecName: "config") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.693149 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.694734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.723741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "pvc-8e635818-7819-4dc1-bb9c-8b7954e16573". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.739015 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config" (OuterVolumeSpecName: "web-config") pod "177c70b9-7b56-48f4-abd1-4d7a9c86450a" (UID: "177c70b9-7b56-48f4-abd1-4d7a9c86450a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.756915 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btrfb"] Feb 17 16:16:57 crc kubenswrapper[4829]: W0217 16:16:57.763533 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf678697_9139_4571_9d3b_9c51ec34df7c.slice/crio-685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97 WatchSource:0}: Error finding container 685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97: Status 404 returned error can't find the container with id 685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97 Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776537 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd979\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-kube-api-access-bd979\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc 
kubenswrapper[4829]: I0217 16:16:57.776583 4829 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776594 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776604 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776640 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") on node \"crc\" " Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776652 4829 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776665 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776674 4829 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/177c70b9-7b56-48f4-abd1-4d7a9c86450a-config-out\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776682 4829 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/177c70b9-7b56-48f4-abd1-4d7a9c86450a-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.776690 4829 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/177c70b9-7b56-48f4-abd1-4d7a9c86450a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.799567 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.799864 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e635818-7819-4dc1-bb9c-8b7954e16573" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573") on node "crc" Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.873227 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:16:57 crc kubenswrapper[4829]: W0217 16:16:57.873908 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod694bd0d8_2bbe_4f9a_945a_dd7132c0645e.slice/crio-5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043 WatchSource:0}: Error finding container 5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043: Status 404 returned error can't find the container with id 5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043 Feb 17 16:16:57 crc kubenswrapper[4829]: I0217 16:16:57.878299 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.249316 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"177c70b9-7b56-48f4-abd1-4d7a9c86450a","Type":"ContainerDied","Data":"7447c65a301d56c7dfc2822a2a580ecd7354358d540c16f52ee4d7688f3e3462"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.249363 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252745 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252805 4829 scope.go:117] "RemoveContainer" containerID="0d034fb22cb7620682b2ae7b1d730ecfaffd1a5c0b115a77b00b0f8bd1380e9a" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.251236 4829 generic.go:334] "Generic (PLEG): container finished" podID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerID="56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580" exitCode=0 Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.252899 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerStarted","Data":"5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.257132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.262269 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265161 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerStarted","Data":"e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265193 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerStarted","Data":"685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97"} Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.265250 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.312387 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-btrfb" podStartSLOduration=7.312370691 podStartE2EDuration="7.312370691s" podCreationTimestamp="2026-02-17 16:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:58.31088461 +0000 UTC m=+1330.727902588" watchObservedRunningTime="2026-02-17 16:16:58.312370691 +0000 UTC m=+1330.729388669" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.522715 4829 scope.go:117] "RemoveContainer" containerID="acf6f9d209342af6a8dc45cc31107ae469ccefd61ad94baa4d8e87ca307ee4e7" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.577733 4829 scope.go:117] "RemoveContainer" containerID="4e9686172df33f3f8f34f0610354260ae9e859e93a7735f49451d4765d978e9f" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.580581 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.599186 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.607948 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608356 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608373 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608383 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608389 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608408 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608415 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608428 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608435 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608452 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="init-config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608458 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="init-config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: E0217 16:16:58.608469 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608475 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 
16:16:58.608744 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="config-reloader" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608759 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c492d16-f301-449b-a877-a15a17739865" containerName="mariadb-database-create" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608780 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="thanos-sidecar" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608792 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" containerName="mariadb-account-create-update" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.608801 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" containerName="prometheus" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.612873 4829 scope.go:117] "RemoveContainer" containerID="7ea66a13c9f4fb5c69a14c26667ccb13b811f0d2d47f2e4d9fb91e61c8fe4193" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.613263 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616553 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-vxmz6" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616595 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616628 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616734 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616750 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.616848 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.617257 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.624699 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.636221 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.693615 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695180 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695320 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.695755 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.696050 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: 
\"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.697653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.697770 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698236 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698368 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698460 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698552 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698668 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.698754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.800986 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801784 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.801900 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.802011 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.802192 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803403 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803500 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803783 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803855 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.803954 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.804169 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.804914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc 
kubenswrapper[4829]: I0217 16:16:58.805972 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0afff9a0-fd8a-4388-903e-647ae66128db-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.806508 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.806550 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe3c2171ea8e537d787d3308fa5bc6f869ae05d2809df2c7eb9ceb73db78889d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808640 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-config\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808639 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0afff9a0-fd8a-4388-903e-647ae66128db-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.808925 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809375 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809523 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809665 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.809708 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afff9a0-fd8a-4388-903e-647ae66128db-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.822220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnqgz\" (UniqueName: \"kubernetes.io/projected/0afff9a0-fd8a-4388-903e-647ae66128db-kube-api-access-fnqgz\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.847754 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e635818-7819-4dc1-bb9c-8b7954e16573\") pod \"prometheus-metric-storage-0\" (UID: \"0afff9a0-fd8a-4388-903e-647ae66128db\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:58 crc kubenswrapper[4829]: I0217 16:16:58.998327 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.273595 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerStarted","Data":"50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.276100 4829 generic.go:334] "Generic (PLEG): container finished" podID="df678697-9139-4571-9d3b-9c51ec34df7c" containerID="e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024" exitCode=0 Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.276171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerDied","Data":"e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.281483 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerStarted","Data":"d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2"} Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.281523 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.302828 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9z4lf" podStartSLOduration=3.670333191 podStartE2EDuration="21.302809634s" podCreationTimestamp="2026-02-17 16:16:38 +0000 UTC" firstStartedPulling="2026-02-17 16:16:39.858304483 +0000 UTC m=+1312.275322461" lastFinishedPulling="2026-02-17 16:16:57.490780936 +0000 UTC m=+1329.907798904" observedRunningTime="2026-02-17 16:16:59.293281997 +0000 UTC m=+1331.710299975" 
watchObservedRunningTime="2026-02-17 16:16:59.302809634 +0000 UTC m=+1331.719827612" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.339463 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" podStartSLOduration=11.339443483 podStartE2EDuration="11.339443483s" podCreationTimestamp="2026-02-17 16:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:59.327012337 +0000 UTC m=+1331.744030315" watchObservedRunningTime="2026-02-17 16:16:59.339443483 +0000 UTC m=+1331.756461461" Feb 17 16:16:59 crc kubenswrapper[4829]: I0217 16:16:59.517335 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:16:59 crc kubenswrapper[4829]: W0217 16:16:59.522289 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0afff9a0_fd8a_4388_903e_647ae66128db.slice/crio-b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf WatchSource:0}: Error finding container b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf: Status 404 returned error can't find the container with id b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.293808 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177c70b9-7b56-48f4-abd1-4d7a9c86450a" path="/var/lib/kubelet/pods/177c70b9-7b56-48f4-abd1-4d7a9c86450a/volumes" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.295611 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"b87f0b01e276d38d17d51e09a81958de7e0cc882b53fa07e243a4e7c38394baf"} Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.726279 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.759530 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") pod \"df678697-9139-4571-9d3b-9c51ec34df7c\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.759768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") pod \"df678697-9139-4571-9d3b-9c51ec34df7c\" (UID: \"df678697-9139-4571-9d3b-9c51ec34df7c\") " Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.760325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df678697-9139-4571-9d3b-9c51ec34df7c" (UID: "df678697-9139-4571-9d3b-9c51ec34df7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.760543 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df678697-9139-4571-9d3b-9c51ec34df7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.772951 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9" (OuterVolumeSpecName: "kube-api-access-lxgs9") pod "df678697-9139-4571-9d3b-9c51ec34df7c" (UID: "df678697-9139-4571-9d3b-9c51ec34df7c"). InnerVolumeSpecName "kube-api-access-lxgs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:00 crc kubenswrapper[4829]: I0217 16:17:00.862354 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgs9\" (UniqueName: \"kubernetes.io/projected/df678697-9139-4571-9d3b-9c51ec34df7c-kube-api-access-lxgs9\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.034683 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: E0217 16:17:01.035476 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.035503 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.035827 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" containerName="mariadb-account-create-update" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.037065 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.039978 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.046914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067559 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067633 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.067670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.169854 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.170040 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.170066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.175549 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.176136 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.206909 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"mysqld-exporter-0\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.305602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btrfb" 
event={"ID":"df678697-9139-4571-9d3b-9c51ec34df7c","Type":"ContainerDied","Data":"685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97"} Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.305646 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685a3273cfd11b2f3ca9ee62e28acc8daa97846f2240a7fcc9094adc2d2d1f97" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.306748 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btrfb" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.359013 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:17:01 crc kubenswrapper[4829]: I0217 16:17:01.867453 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:17:01 crc kubenswrapper[4829]: W0217 16:17:01.950410 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4cfa907_6caa_41a9_b86a_371fd960e471.slice/crio-16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c WatchSource:0}: Error finding container 16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c: Status 404 returned error can't find the container with id 16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c Feb 17 16:17:02 crc kubenswrapper[4829]: I0217 16:17:02.317977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerStarted","Data":"16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c"} Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.336684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a"} Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.783228 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.855611 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:03 crc kubenswrapper[4829]: I0217 16:17:03.856237 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" containerID="cri-o://4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" gracePeriod=10 Feb 17 16:17:04 crc kubenswrapper[4829]: I0217 16:17:04.348387 4829 generic.go:334] "Generic (PLEG): container finished" podID="a954ada0-6e54-469b-a010-3da22abd6a61" containerID="4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" exitCode=0 Feb 17 16:17:04 crc kubenswrapper[4829]: I0217 16:17:04.348532 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89"} Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.204869 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.229720 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:17:05 crc kubenswrapper[4829]: I0217 16:17:05.275812 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.400959 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" event={"ID":"a954ada0-6e54-469b-a010-3da22abd6a61","Type":"ContainerDied","Data":"db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec"} Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.401678 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db50ecd6bfd34140244de05f54d95a706f8227929aa7b76a78ffa8de2545a0ec" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.490096 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606660 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.606960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") pod 
\"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.607000 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") pod \"a954ada0-6e54-469b-a010-3da22abd6a61\" (UID: \"a954ada0-6e54-469b-a010-3da22abd6a61\") " Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.614629 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f" (OuterVolumeSpecName: "kube-api-access-cl46f") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "kube-api-access-cl46f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.662665 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.667994 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.672241 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.698392 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config" (OuterVolumeSpecName: "config") pod "a954ada0-6e54-469b-a010-3da22abd6a61" (UID: "a954ada0-6e54-469b-a010-3da22abd6a61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709477 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709511 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl46f\" (UniqueName: \"kubernetes.io/projected/a954ada0-6e54-469b-a010-3da22abd6a61-kube-api-access-cl46f\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709523 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709535 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 
16:17:06 crc kubenswrapper[4829]: I0217 16:17:06.709545 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a954ada0-6e54-469b-a010-3da22abd6a61-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.413849 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerStarted","Data":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"} Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.416588 4829 generic.go:334] "Generic (PLEG): container finished" podID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerID="50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d" exitCode=0 Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.416662 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-tz7z4" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.420860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerDied","Data":"50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d"} Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.442038 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.0657374 podStartE2EDuration="6.442019529s" podCreationTimestamp="2026-02-17 16:17:01 +0000 UTC" firstStartedPulling="2026-02-17 16:17:01.952369867 +0000 UTC m=+1334.369387845" lastFinishedPulling="2026-02-17 16:17:06.328651996 +0000 UTC m=+1338.745669974" observedRunningTime="2026-02-17 16:17:07.439593483 +0000 UTC m=+1339.856611461" watchObservedRunningTime="2026-02-17 16:17:07.442019529 +0000 UTC m=+1339.859037497" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.490891 4829 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.516685 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-tz7z4"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528006 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:07 crc kubenswrapper[4829]: E0217 16:17:07.528646 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528664 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: E0217 16:17:07.528676 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="init" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528682 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="init" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.528933 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" containerName="dnsmasq-dns" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.530271 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.542748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.630199 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.631545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.631626 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.632035 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.635334 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.649779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733469 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.733595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: 
\"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.734467 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.757654 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"cinder-db-create-wlnfn\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.809323 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.811589 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.821307 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.829640 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.830992 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.835917 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.841729 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.842412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.845727 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.852894 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.876798 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.917755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"cinder-2cec-account-create-update-hfc78\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.939676 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.941288 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947491 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947672 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947778 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: 
\"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947879 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.947998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.960351 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:07 crc kubenswrapper[4829]: I0217 16:17:07.964602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.013255 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.015057 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.026922 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.028501 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033299 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033480 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.033927 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.034213 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.040259 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.049995 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.051717 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.051897 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052150 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052427 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.052708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.056404 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: 
I0217 16:17:08.057355 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.057519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058327 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.057425 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.058720 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: 
\"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.053760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.059695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.068761 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.084167 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.090206 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"heat-db-create-gvpcv\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.092763 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"heat-0c9f-account-create-update-htzx9\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc 
kubenswrapper[4829]: I0217 16:17:08.107024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"barbican-db-create-sgsbf\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.108881 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.153896 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.157973 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161184 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161886 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.161962 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162145 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162190 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162443 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.162592 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.163739 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.164472 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.171031 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.176556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.182174 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.183871 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.185599 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.193874 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"barbican-d7b6-account-create-update-n4xbx\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.194933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"neutron-db-create-tfzp7\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.199877 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"keystone-db-sync-cs5v7\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.264615 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.264881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod 
\"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.307470 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a954ada0-6e54-469b-a010-3da22abd6a61" path="/var/lib/kubelet/pods/a954ada0-6e54-469b-a010-3da22abd6a61/volumes" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.365892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.366321 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.370692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.407612 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"neutron-0525-account-create-update-t6qsf\" (UID: 
\"a1857247-1b55-4f04-91b5-2725347ddd5e\") " pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.432760 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.436922 4829 generic.go:334] "Generic (PLEG): container finished" podID="0afff9a0-fd8a-4388-903e-647ae66128db" containerID="1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a" exitCode=0 Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.437963 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerDied","Data":"1c2e467445d67780c535b7751bf7160bbaeb96f682007df78a696a84795b076a"} Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.445431 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.463442 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.498124 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.575540 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.786591 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:17:08 crc kubenswrapper[4829]: I0217 16:17:08.903464 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.241684 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.288617 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319180 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 
17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.319302 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") pod \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\" (UID: \"e14bea24-3170-4bdb-8811-9a94d94ae4b7\") " Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.334510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc" (OuterVolumeSpecName: "kube-api-access-njvhc") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "kube-api-access-njvhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.353820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.368036 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.394418 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.436963 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.441443 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njvhc\" (UniqueName: \"kubernetes.io/projected/e14bea24-3170-4bdb-8811-9a94d94ae4b7-kube-api-access-njvhc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.441468 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.443141 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.450231 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerStarted","Data":"026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.451559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerStarted","Data":"4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.458135 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"ca7725561433222ef92fd7ad0ec590cf20bab7b196d6e1f6e9339f9b216776bd"} Feb 17 
16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.461494 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data" (OuterVolumeSpecName: "config-data") pod "e14bea24-3170-4bdb-8811-9a94d94ae4b7" (UID: "e14bea24-3170-4bdb-8811-9a94d94ae4b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.462944 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerStarted","Data":"414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.462978 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerStarted","Data":"30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.467623 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerStarted","Data":"e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.467655 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerStarted","Data":"0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.475822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" 
event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerStarted","Data":"2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.475870 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerStarted","Data":"872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9z4lf" event={"ID":"e14bea24-3170-4bdb-8811-9a94d94ae4b7","Type":"ContainerDied","Data":"a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485933 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16d0f1e0e97a12bd28aa936f9602f11430168deb6ed4d7c8a39566f449c5b8e" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.485995 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9z4lf" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.506187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerStarted","Data":"3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c"} Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.516443 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2cec-account-create-update-hfc78" podStartSLOduration=2.516348121 podStartE2EDuration="2.516348121s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.488960841 +0000 UTC m=+1341.905978819" watchObservedRunningTime="2026-02-17 16:17:09.516348121 +0000 UTC m=+1341.933366099" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.545180 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14bea24-3170-4bdb-8811-9a94d94ae4b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.561356 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wlnfn" podStartSLOduration=2.561338645 podStartE2EDuration="2.561338645s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.512770114 +0000 UTC m=+1341.929788092" watchObservedRunningTime="2026-02-17 16:17:09.561338645 +0000 UTC m=+1341.978356623" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.572841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-sgsbf" podStartSLOduration=2.572823375 
podStartE2EDuration="2.572823375s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:09.527673006 +0000 UTC m=+1341.944690984" watchObservedRunningTime="2026-02-17 16:17:09.572823375 +0000 UTC m=+1341.989841353" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.756092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:17:09 crc kubenswrapper[4829]: W0217 16:17:09.758972 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fb73f59_cddf_4630_b754_264ec2ccee1e.slice/crio-6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6 WatchSource:0}: Error finding container 6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6: Status 404 returned error can't find the container with id 6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6 Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.770168 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.803099 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.901453 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:09 crc kubenswrapper[4829]: E0217 16:17:09.901968 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.901986 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc 
kubenswrapper[4829]: I0217 16:17:09.902919 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" containerName="glance-db-sync" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.904119 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.925861 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960028 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960099 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960164 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960212 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:09 crc kubenswrapper[4829]: I0217 16:17:09.960363 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.066561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.067757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.068369 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.069191 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.070792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.070965 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.071960 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.072451 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.073002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.090595 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"dnsmasq-dns-895cf5cf-k8994\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.245838 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.524399 4829 generic.go:334] "Generic (PLEG): container finished" podID="45907bce-01ca-47e8-bfef-12ae037bb254" containerID="61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.524759 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerDied","Data":"61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.561567 4829 generic.go:334] "Generic (PLEG): container finished" podID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerID="4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.563526 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerDied","Data":"4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.575142 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerStarted","Data":"1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.575185 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerStarted","Data":"6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.581819 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerID="414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.581949 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerDied","Data":"414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.591464 4829 generic.go:334] "Generic (PLEG): container finished" podID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerID="e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.591529 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerDied","Data":"e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606687 4829 generic.go:334] "Generic (PLEG): container finished" podID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerID="0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606747 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerDied","Data":"0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.606768 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerStarted","Data":"a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.629409 4829 
generic.go:334] "Generic (PLEG): container finished" podID="64394b7b-175f-4429-b284-783394b5362b" containerID="a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.630365 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerDied","Data":"a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.661428 4829 generic.go:334] "Generic (PLEG): container finished" podID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerID="2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.661502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerDied","Data":"2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.664948 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerStarted","Data":"0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847"} Feb 17 16:17:10 crc kubenswrapper[4829]: I0217 16:17:10.803884 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.685013 4829 generic.go:334] "Generic (PLEG): container finished" podID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerID="1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f" exitCode=0 Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.685327 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" 
event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerDied","Data":"1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f"} Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.706779 4829 generic.go:334] "Generic (PLEG): container finished" podID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerID="06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd" exitCode=0 Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.707736 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd"} Feb 17 16:17:11 crc kubenswrapper[4829]: I0217 16:17:11.707763 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerStarted","Data":"4f1a71803b633d03391de17f6f16604c5e107eae12d0b26db71e47dca08add20"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.290642 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.436187 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") pod \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.436484 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") pod \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\" (UID: \"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.438293 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" (UID: "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.446632 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj" (OuterVolumeSpecName: "kube-api-access-g7xwj") pod "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" (UID: "043875d4-c1c8-4363-95ca-a7ad4a1d7ae4"). InnerVolumeSpecName "kube-api-access-g7xwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.540551 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.540605 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7xwj\" (UniqueName: \"kubernetes.io/projected/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4-kube-api-access-g7xwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.571367 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.574895 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.610833 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.616689 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.628528 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.638863 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641771 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") pod \"f7208dff-6f9e-410a-9b88-e6def8b38478\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641844 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") pod \"a1857247-1b55-4f04-91b5-2725347ddd5e\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641927 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") pod \"a1857247-1b55-4f04-91b5-2725347ddd5e\" (UID: \"a1857247-1b55-4f04-91b5-2725347ddd5e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.641956 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") pod \"f7208dff-6f9e-410a-9b88-e6def8b38478\" (UID: \"f7208dff-6f9e-410a-9b88-e6def8b38478\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642190 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7208dff-6f9e-410a-9b88-e6def8b38478" (UID: "f7208dff-6f9e-410a-9b88-e6def8b38478"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642523 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1857247-1b55-4f04-91b5-2725347ddd5e" (UID: "a1857247-1b55-4f04-91b5-2725347ddd5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642937 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1857247-1b55-4f04-91b5-2725347ddd5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.642951 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7208dff-6f9e-410a-9b88-e6def8b38478-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.643864 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.650186 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8" (OuterVolumeSpecName: "kube-api-access-g77j8") pod "f7208dff-6f9e-410a-9b88-e6def8b38478" (UID: "f7208dff-6f9e-410a-9b88-e6def8b38478"). InnerVolumeSpecName "kube-api-access-g77j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.657898 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk" (OuterVolumeSpecName: "kube-api-access-2blsk") pod "a1857247-1b55-4f04-91b5-2725347ddd5e" (UID: "a1857247-1b55-4f04-91b5-2725347ddd5e"). InnerVolumeSpecName "kube-api-access-2blsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.734374 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wlnfn" event={"ID":"964c7b6b-c551-489a-9a5b-7fbe31c855b2","Type":"ContainerDied","Data":"0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.735238 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0703e077391acefd8e35f7efbf79a73d90e017be6e28ab3ff2f62ffbae693283" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.734387 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wlnfn" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741351 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sgsbf" event={"ID":"043875d4-c1c8-4363-95ca-a7ad4a1d7ae4","Type":"ContainerDied","Data":"872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741389 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872f1d11a822806481ffbe83ab191136e39f8381223e1689f368a1f897319626" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.741432 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sgsbf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.743873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") pod \"84ad18d3-95f7-43e4-b906-65466cf9b14f\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.743926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") pod \"45907bce-01ca-47e8-bfef-12ae037bb254\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744034 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") pod \"45907bce-01ca-47e8-bfef-12ae037bb254\" (UID: \"45907bce-01ca-47e8-bfef-12ae037bb254\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744060 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") pod \"64394b7b-175f-4429-b284-783394b5362b\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744089 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") pod \"64394b7b-175f-4429-b284-783394b5362b\" (UID: \"64394b7b-175f-4429-b284-783394b5362b\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744109 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") pod \"5fb73f59-cddf-4630-b754-264ec2ccee1e\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744147 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") pod \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") pod \"84ad18d3-95f7-43e4-b906-65466cf9b14f\" (UID: \"84ad18d3-95f7-43e4-b906-65466cf9b14f\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744245 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") pod \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\" (UID: \"964c7b6b-c551-489a-9a5b-7fbe31c855b2\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744265 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") pod \"5fb73f59-cddf-4630-b754-264ec2ccee1e\" (UID: \"5fb73f59-cddf-4630-b754-264ec2ccee1e\") " Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744733 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77j8\" (UniqueName: \"kubernetes.io/projected/f7208dff-6f9e-410a-9b88-e6def8b38478-kube-api-access-g77j8\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744757 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blsk\" (UniqueName: \"kubernetes.io/projected/a1857247-1b55-4f04-91b5-2725347ddd5e-kube-api-access-2blsk\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tfzp7" event={"ID":"45907bce-01ca-47e8-bfef-12ae037bb254","Type":"ContainerDied","Data":"3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744839 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9590fae7c1dde9b0174e98b8614755f44bc32b17edcc75cd64acfe1cf39c2c" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.744877 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tfzp7" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64394b7b-175f-4429-b284-783394b5362b" (UID: "64394b7b-175f-4429-b284-783394b5362b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745182 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fb73f59-cddf-4630-b754-264ec2ccee1e" (UID: "5fb73f59-cddf-4630-b754-264ec2ccee1e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745589 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84ad18d3-95f7-43e4-b906-65466cf9b14f" (UID: "84ad18d3-95f7-43e4-b906-65466cf9b14f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.745923 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "964c7b6b-c551-489a-9a5b-7fbe31c855b2" (UID: "964c7b6b-c551-489a-9a5b-7fbe31c855b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.746806 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45907bce-01ca-47e8-bfef-12ae037bb254" (UID: "45907bce-01ca-47e8-bfef-12ae037bb254"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.749167 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx" (OuterVolumeSpecName: "kube-api-access-jmqqx") pod "964c7b6b-c551-489a-9a5b-7fbe31c855b2" (UID: "964c7b6b-c551-489a-9a5b-7fbe31c855b2"). InnerVolumeSpecName "kube-api-access-jmqqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.750425 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85" (OuterVolumeSpecName: "kube-api-access-vww85") pod "5fb73f59-cddf-4630-b754-264ec2ccee1e" (UID: "5fb73f59-cddf-4630-b754-264ec2ccee1e"). InnerVolumeSpecName "kube-api-access-vww85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5" (OuterVolumeSpecName: "kube-api-access-f5mc5") pod "45907bce-01ca-47e8-bfef-12ae037bb254" (UID: "45907bce-01ca-47e8-bfef-12ae037bb254"). InnerVolumeSpecName "kube-api-access-f5mc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d7b6-account-create-update-n4xbx" event={"ID":"5fb73f59-cddf-4630-b754-264ec2ccee1e","Type":"ContainerDied","Data":"6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751762 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed3dc2d4974f712d0f8671264923517221fdfc7c4c80e4e449788479e03b0d6" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.751708 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d7b6-account-create-update-n4xbx" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.753096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7" (OuterVolumeSpecName: "kube-api-access-drcd7") pod "64394b7b-175f-4429-b284-783394b5362b" (UID: "64394b7b-175f-4429-b284-783394b5362b"). InnerVolumeSpecName "kube-api-access-drcd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.754314 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"318f8d43e12a3179e894e2996e37bee062931a3036d8b7a57c8e1d5e759380f1"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.757508 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t" (OuterVolumeSpecName: "kube-api-access-kpj6t") pod "84ad18d3-95f7-43e4-b906-65466cf9b14f" (UID: "84ad18d3-95f7-43e4-b906-65466cf9b14f"). InnerVolumeSpecName "kube-api-access-kpj6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758017 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2cec-account-create-update-hfc78" event={"ID":"84ad18d3-95f7-43e4-b906-65466cf9b14f","Type":"ContainerDied","Data":"30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758050 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2cec-account-create-update-hfc78" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.758057 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30771b3bf1afe54045b0be5536bee09d00e80acf7acdda2bbb0cddd11a422621" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.764797 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerStarted","Data":"111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770004 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0525-account-create-update-t6qsf" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770060 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0525-account-create-update-t6qsf" event={"ID":"a1857247-1b55-4f04-91b5-2725347ddd5e","Type":"ContainerDied","Data":"a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.770091 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49edc71ae545447d4224438936bc76c426ea4b9594559942c407b822604bd66" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0c9f-account-create-update-htzx9" event={"ID":"64394b7b-175f-4429-b284-783394b5362b","Type":"ContainerDied","Data":"026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777146 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026c498142386cd19b141428ad1df9a23e2816b070449feaf37d7ff5e3a40483" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.777190 4829 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-0c9f-account-create-update-htzx9" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.786841 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podStartSLOduration=3.78682607 podStartE2EDuration="3.78682607s" podCreationTimestamp="2026-02-17 16:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.786003388 +0000 UTC m=+1345.203021366" watchObservedRunningTime="2026-02-17 16:17:12.78682607 +0000 UTC m=+1345.203844048" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788882 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-gvpcv" event={"ID":"f7208dff-6f9e-410a-9b88-e6def8b38478","Type":"ContainerDied","Data":"4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820"} Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788924 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a951ade5ac8ae8a7631c3e49e92907140c256ad624ca9740ab0c39a21cc6820" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.788989 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-gvpcv" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848183 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64394b7b-175f-4429-b284-783394b5362b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848543 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vww85\" (UniqueName: \"kubernetes.io/projected/5fb73f59-cddf-4630-b754-264ec2ccee1e-kube-api-access-vww85\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848900 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmqqx\" (UniqueName: \"kubernetes.io/projected/964c7b6b-c551-489a-9a5b-7fbe31c855b2-kube-api-access-jmqqx\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848938 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ad18d3-95f7-43e4-b906-65466cf9b14f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848951 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/964c7b6b-c551-489a-9a5b-7fbe31c855b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848965 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb73f59-cddf-4630-b754-264ec2ccee1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.848981 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpj6t\" (UniqueName: \"kubernetes.io/projected/84ad18d3-95f7-43e4-b906-65466cf9b14f-kube-api-access-kpj6t\") on node \"crc\" DevicePath \"\"" Feb 
17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849008 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mc5\" (UniqueName: \"kubernetes.io/projected/45907bce-01ca-47e8-bfef-12ae037bb254-kube-api-access-f5mc5\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849021 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45907bce-01ca-47e8-bfef-12ae037bb254-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:12 crc kubenswrapper[4829]: I0217 16:17:12.849034 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drcd7\" (UniqueName: \"kubernetes.io/projected/64394b7b-175f-4429-b284-783394b5362b-kube-api-access-drcd7\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:13 crc kubenswrapper[4829]: I0217 16:17:13.805693 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:19 crc kubenswrapper[4829]: I0217 16:17:19.877177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0afff9a0-fd8a-4388-903e-647ae66128db","Type":"ContainerStarted","Data":"2dcdfce0630d694970e5143b2118a6e5bf6a933de71be67ae3cce25ba6df4523"} Feb 17 16:17:19 crc kubenswrapper[4829]: I0217 16:17:19.920421 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.920405151 podStartE2EDuration="21.920405151s" podCreationTimestamp="2026-02-17 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:19.916829685 +0000 UTC m=+1352.333847673" watchObservedRunningTime="2026-02-17 16:17:19.920405151 +0000 UTC m=+1352.337423129" Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.247841 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.365544 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.365876 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" containerID="cri-o://d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" gracePeriod=10 Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.891738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerStarted","Data":"0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237"} Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.896149 4829 generic.go:334] "Generic (PLEG): container finished" podID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerID="d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" exitCode=0 Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.896520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2"} Feb 17 16:17:20 crc kubenswrapper[4829]: I0217 16:17:20.909294 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cs5v7" podStartSLOduration=3.295865659 podStartE2EDuration="13.909276963s" podCreationTimestamp="2026-02-17 16:17:07 +0000 UTC" firstStartedPulling="2026-02-17 16:17:09.877191734 +0000 UTC m=+1342.294209702" lastFinishedPulling="2026-02-17 16:17:20.490603028 +0000 UTC m=+1352.907621006" observedRunningTime="2026-02-17 16:17:20.908399979 +0000 UTC 
m=+1353.325417977" watchObservedRunningTime="2026-02-17 16:17:20.909276963 +0000 UTC m=+1353.326294931" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.089794 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.250944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.250992 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251048 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251124 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: 
\"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.251290 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") pod \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\" (UID: \"694bd0d8-2bbe-4f9a-945a-dd7132c0645e\") " Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.265495 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4" (OuterVolumeSpecName: "kube-api-access-8drp4") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "kube-api-access-8drp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.301362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.302099 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.305511 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.308156 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.309114 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config" (OuterVolumeSpecName: "config") pod "694bd0d8-2bbe-4f9a-945a-dd7132c0645e" (UID: "694bd0d8-2bbe-4f9a-945a-dd7132c0645e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354561 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drp4\" (UniqueName: \"kubernetes.io/projected/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-kube-api-access-8drp4\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354605 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354616 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354626 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354635 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.354644 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/694bd0d8-2bbe-4f9a-945a-dd7132c0645e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.913906 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.914822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lpwtt" event={"ID":"694bd0d8-2bbe-4f9a-945a-dd7132c0645e","Type":"ContainerDied","Data":"5c15c2540b28010efef3741ba18add9744ad7c41a559f9823a7590310cf46043"} Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.914944 4829 scope.go:117] "RemoveContainer" containerID="d096aaedb43a804772caefb7d86ddab3a6196df5bcdaa639ede6cc65fcebd4a2" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.961125 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.969005 4829 scope.go:117] "RemoveContainer" containerID="56ef58bc306789ee179a130a44f779838212093716a520eb452c992bd9d4c580" Feb 17 16:17:21 crc kubenswrapper[4829]: I0217 16:17:21.973452 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lpwtt"] Feb 17 16:17:22 crc kubenswrapper[4829]: I0217 16:17:22.303436 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" path="/var/lib/kubelet/pods/694bd0d8-2bbe-4f9a-945a-dd7132c0645e/volumes" Feb 17 16:17:24 crc kubenswrapper[4829]: I0217 16:17:24.000503 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:26 crc kubenswrapper[4829]: I0217 16:17:26.975975 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerID="0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237" exitCode=0 Feb 17 16:17:26 crc kubenswrapper[4829]: I0217 16:17:26.976430 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" 
event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerDied","Data":"0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237"} Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.434680 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.530710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.531497 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.531647 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") pod \"3fd83d7c-5347-49c7-a979-d63e812d294c\" (UID: \"3fd83d7c-5347-49c7-a979-d63e812d294c\") " Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.539071 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf2e81e7f-9610-493c-bdb8-6a7de58b94bf"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2e81e7f_9610_493c_bdb8_6a7de58b94bf.slice" Feb 17 16:17:28 crc kubenswrapper[4829]: E0217 16:17:28.539142 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths 
for [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf2e81e7f-9610-493c-bdb8-6a7de58b94bf] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2e81e7f_9610_493c_bdb8_6a7de58b94bf.slice" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.550918 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr" (OuterVolumeSpecName: "kube-api-access-pp9qr") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "kube-api-access-pp9qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.571826 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.607780 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data" (OuterVolumeSpecName: "config-data") pod "3fd83d7c-5347-49c7-a979-d63e812d294c" (UID: "3fd83d7c-5347-49c7-a979-d63e812d294c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634278 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634322 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd83d7c-5347-49c7-a979-d63e812d294c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.634336 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9qr\" (UniqueName: \"kubernetes.io/projected/3fd83d7c-5347-49c7-a979-d63e812d294c-kube-api-access-pp9qr\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.998974 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-5498-account-create-update-qsrnr" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cs5v7" event={"ID":"3fd83d7c-5347-49c7-a979-d63e812d294c","Type":"ContainerDied","Data":"0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847"} Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999502 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3c059c97c39996e4604b26fe9a8e4a1f70186b28b28a4577db730ace130847" Feb 17 16:17:28 crc kubenswrapper[4829]: I0217 16:17:28.999130 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cs5v7" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.000355 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.009242 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.301584 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302764 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302781 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302799 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302813 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302826 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302834 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302842 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" 
containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302847 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302862 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302868 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302916 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="init" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302923 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="init" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302930 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302936 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302949 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302954 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302964 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302970 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.302985 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.302991 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: E0217 16:17:29.303004 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303009 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303282 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303298 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303305 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303311 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc 
kubenswrapper[4829]: I0217 16:17:29.303323 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="64394b7b-175f-4429-b284-783394b5362b" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303335 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303344 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" containerName="mariadb-account-create-update" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303353 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" containerName="mariadb-database-create" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303360 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="694bd0d8-2bbe-4f9a-945a-dd7132c0645e" containerName="dnsmasq-dns" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.303372 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" containerName="keystone-db-sync" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.307638 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.328622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.357686 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.358979 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.363271 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.368138 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.371778 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.372352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.378795 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.382807 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459352 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459471 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459502 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459523 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459666 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459768 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.459791 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460103 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.460175 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.463636 4829 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.465758 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.474601 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nfxjw" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.474806 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.488119 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.548786 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.550750 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.561693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.561882 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.562051 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8kvfc" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563817 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563917 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563953 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563969 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.563984 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564003 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564033 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564048 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564070 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564134 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.564149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568004 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568248 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.568872 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.570991 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.572418 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: 
\"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.573147 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.576901 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.577514 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.585365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.593066 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.616836 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod 
\"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.619329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"keystone-bootstrap-7l7pb\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.631971 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"dnsmasq-dns-6c9c9f998c-lk9d8\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") " pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.652893 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666928 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666972 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.666993 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667046 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667072 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667131 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.667148 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.708523 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.724039 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.725694 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.732380 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-68q4f" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.733721 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.767121 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770240 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770323 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770342 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: 
\"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770421 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770481 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.770496 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 
16:17:29.775157 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.795939 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.801504 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.804494 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.805177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.805356 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") 
pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.809370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.810686 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"cinder-db-sync-n46p8\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.813283 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"heat-db-sync-mgkjx\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.813376 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.814744 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.833261 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p9cb5" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.833494 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.834109 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.834801 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.854311 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.872844 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.872920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.873071 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") 
pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.920337 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.963032 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.965372 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977392 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977457 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977477 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977596 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977647 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.977725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:29 crc kubenswrapper[4829]: I0217 16:17:29.981760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:29.998091 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.048510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"barbican-db-sync-xh926\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") " pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.080991 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081041 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081077 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081111 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081162 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081206 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081271 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkjbg\" (UniqueName: 
\"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081366 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.081875 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.083471 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.105883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.111271 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.116011 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.127234 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.135668 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xh926" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145145 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145227 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfff2" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.145544 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.157172 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.160253 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.160978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.168684 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"placement-db-sync-8s649\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.217978 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218037 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218161 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218193 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218270 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218355 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.218412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.222965 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.224876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.225820 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.227415 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.227958 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.228414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.229149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod 
\"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.243302 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.246071 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.258506 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.258662 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.269622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.299933 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"dnsmasq-dns-57c957c4ff-kjjvn\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.323947 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.323988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324223 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324239 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 
16:17:30.324265 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324286 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.324317 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.335181 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.335339 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.347384 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.350377 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"neutron-db-sync-jrh5n\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.430090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431295 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431336 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431371 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431407 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.431449 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.437202 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.437483 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.444732 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc 
kubenswrapper[4829]: I0217 16:17:30.448088 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.449743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.453317 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.455863 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.457721 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459135 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459272 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"ceilometer-0\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.459843 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xbdvq" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.460660 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.462344 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.465462 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.503180 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.534935 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535084 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535115 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535163 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535192 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535213 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.535230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.548730 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.555874 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.558369 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.558592 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.567452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.623689 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"] Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.629994 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636447 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636502 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636526 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636542 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636599 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636675 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636748 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636778 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636804 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636822 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636879 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod 
\"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.636902 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.637186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.637266 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.640303 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.642253 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.643391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.643414 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.658843 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.665319 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.665348 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738508 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738602 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738631 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738691 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738730 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.738857 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.739697 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.739758 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.741055 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744134 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744534 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744554 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.744895 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.745750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.749702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.756211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.777178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"]
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.782745 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.825139 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.877095 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:17:30 crc kubenswrapper[4829]: I0217 16:17:30.899712 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgkjx"]
Feb 17 16:17:30 crc kubenswrapper[4829]: W0217 16:17:30.965224 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d3ed60_8c68_44ec_aaa1_806b5aec5df1.slice/crio-0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958 WatchSource:0}: Error finding container 0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958: Status 404 returned error can't find the container with id 0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.123738 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerStarted","Data":"d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.125413 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerStarted","Data":"0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.127925 4829 generic.go:334] "Generic (PLEG): container finished" podID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerID="7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933" exitCode=0
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.129112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerDied","Data":"7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.129144 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerStarted","Data":"72a8b4daac2d9d070607a45eeb2b33af1441c752a45b30e8f19c0d738ce701e3"}
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.324249 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s649"]
Feb 17 16:17:31 crc kubenswrapper[4829]: W0217 16:17:31.326693 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff4740d_5b36_4273_be02_50bec771e157.slice/crio-d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365 WatchSource:0}: Error finding container d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365: Status 404 returned error can't find the container with id d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.333830 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xh926"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.369158 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n46p8"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.401865 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.755023 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.801661 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jrh5n"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.837880 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.869659 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:17:31 crc kubenswrapper[4829]: I0217 16:17:31.909645 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.082625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.082960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083137 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083245 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.083350 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") pod \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\" (UID: \"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db\") "
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.109861 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.128961 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x" (OuterVolumeSpecName: "kube-api-access-wlx4x") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "kube-api-access-wlx4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: W0217 16:17:32.132617 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb920f32_c8e7_45d7_8c19_40ae485d7c2f.slice/crio-f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6 WatchSource:0}: Error finding container f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6: Status 404 returned error can't find the container with id f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.139642 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.152576 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.157603 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerStarted","Data":"d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.160265 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.160642 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerStarted","Data":"c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.163061 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166684 4829 generic.go:334] "Generic (PLEG): container finished" podID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerID="1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06" exitCode=0
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166775 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerStarted","Data":"de029d86f193dd1c04a644dfbce66d4d5a98f68124c1549de6eaa99d3eb1caa6"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.166891 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config" (OuterVolumeSpecName: "config") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.168243 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" (UID: "3ab5e213-ae02-408f-98ef-9ed6ecf2a1db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.176610 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.176617 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-lk9d8" event={"ID":"3ab5e213-ae02-408f-98ef-9ed6ecf2a1db","Type":"ContainerDied","Data":"72a8b4daac2d9d070607a45eeb2b33af1441c752a45b30e8f19c0d738ce701e3"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.177335 4829 scope.go:117] "RemoveContainer" containerID="7717e0abff97db00eb31038c0449ff24b3a105f718ca0307ac24d78103600933"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193539 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193579 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193593 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlx4x\" (UniqueName: \"kubernetes.io/projected/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-kube-api-access-wlx4x\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193626 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193638 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193650 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.193855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerStarted","Data":"8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.203429 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerStarted","Data":"add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.209530 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.214434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerStarted","Data":"1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.214688 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerStarted","Data":"7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83"}
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.242818 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7l7pb" podStartSLOduration=3.242800919 podStartE2EDuration="3.242800919s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:32.222380587 +0000 UTC m=+1364.639398565" watchObservedRunningTime="2026-02-17 16:17:32.242800919 +0000 UTC m=+1364.659818897"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.267788 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jrh5n" podStartSLOduration=3.267761773 podStartE2EDuration="3.267761773s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:32.237038013 +0000 UTC m=+1364.654056001" watchObservedRunningTime="2026-02-17 16:17:32.267761773 +0000 UTC m=+1364.684779751"
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.337736 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.354109 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-lk9d8"]
Feb 17 16:17:32 crc kubenswrapper[4829]: I0217 16:17:32.837323 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.263042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerStarted","Data":"4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.265051 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn"
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.286329 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podStartSLOduration=4.286309645 podStartE2EDuration="4.286309645s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:33.285789121 +0000 UTC m=+1365.702807119" watchObservedRunningTime="2026-02-17 16:17:33.286309645 +0000 UTC m=+1365.703327623"
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.289884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.293857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886"}
Feb 17 16:17:33 crc kubenswrapper[4829]: I0217 16:17:33.293903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6"}
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.298504 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" path="/var/lib/kubelet/pods/3ab5e213-ae02-408f-98ef-9ed6ecf2a1db/volumes"
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.315073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerStarted","Data":"6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f"}
Feb 17 16:17:34 crc kubenswrapper[4829]: I0217 16:17:34.317105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8"}
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329564 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerStarted","Data":"5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736"}
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329926 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log" containerID="cri-o://435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.330008 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd" containerID="cri-o://6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329805 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" containerID="cri-o://5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.329608 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" containerID="cri-o://ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" gracePeriod=30
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.367171 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.367153281 podStartE2EDuration="6.367153281s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:35.349122894 +0000 UTC m=+1367.766140872" watchObservedRunningTime="2026-02-17 16:17:35.367153281 +0000 UTC m=+1367.784171259"
Feb 17 16:17:35 crc kubenswrapper[4829]: I0217 16:17:35.390826 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.3908102190000005 podStartE2EDuration="6.390810219s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:35.379012291 +0000 UTC m=+1367.796030269" watchObservedRunningTime="2026-02-17 16:17:35.390810219 +0000 UTC m=+1367.807828197"
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.345366 4829 generic.go:334] "Generic (PLEG): container finished" podID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerID="add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491" exitCode=0
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.345454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerDied","Data":"add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491"}
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349033 4829 generic.go:334] "Generic (PLEG): container finished" podID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerID="6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" exitCode=0
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349069 4829 generic.go:334] "Generic (PLEG): container finished" podID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerID="435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886" exitCode=143
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f"}
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.349119 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886"}
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351443 4829 generic.go:334] "Generic (PLEG): container finished" podID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerID="5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" exitCode=143
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351463 4829 generic.go:334] "Generic (PLEG): container finished" podID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerID="ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" exitCode=143
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351478 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736"}
Feb 17 16:17:36 crc kubenswrapper[4829]: I0217 16:17:36.351493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8"}
Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.348778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn"
Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.442689 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"]
Feb 17 16:17:40 crc kubenswrapper[4829]: I0217 16:17:40.442976 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" containerID="cri-o://111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2" gracePeriod=10
Feb 17 16:17:41 crc kubenswrapper[4829]: I0217 16:17:41.421361 4829 generic.go:334] "Generic (PLEG): container finished" podID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerID="111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2" exitCode=0
Feb 17 16:17:41 crc kubenswrapper[4829]: I0217 16:17:41.421410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2"}
Feb 17 16:17:45 crc kubenswrapper[4829]: I0217 16:17:45.247467 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused"
Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.831498 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.832519 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-8s649_openstack(8ff4740d-5b36-4273-be02-50bec771e157): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:17:47 crc kubenswrapper[4829]: E0217 16:17:47.834559 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-8s649" podUID="8ff4740d-5b36-4273-be02-50bec771e157"
Feb 17 16:17:48 crc kubenswrapper[4829]: E0217 16:17:48.515804 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-8s649" podUID="8ff4740d-5b36-4273-be02-50bec771e157"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.247022 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.568094 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f87ae24-e966-4385-8a84-cb66b14cd28b","Type":"ContainerDied","Data":"840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571"}
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.568691 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840066b375faf3873be3546fcf985f3d811a4958146207294fafd47abd688571"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.569061 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.578132 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7l7pb" event={"ID":"3a50b549-2eb5-4bfa-8f1d-3b862974ceed","Type":"ContainerDied","Data":"d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf"}
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.578177 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40e1a97a46355432b1b8637bc6ad66252de0c2e0bf8670bbfb8c824f61119cf"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.581494 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.667959 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669294 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669322 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669408 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669425 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669472 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669529 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.669555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") "
Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670284 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs" (OuterVolumeSpecName: "logs") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670668 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670773 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"1f87ae24-e966-4385-8a84-cb66b14cd28b\" (UID: \"1f87ae24-e966-4385-8a84-cb66b14cd28b\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.670807 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") pod \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\" (UID: \"3a50b549-2eb5-4bfa-8f1d-3b862974ceed\") " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.671901 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.677360 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.681624 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.681783 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq" (OuterVolumeSpecName: "kube-api-access-pwxrq") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "kube-api-access-pwxrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.690653 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts" (OuterVolumeSpecName: "scripts") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.690791 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.705984 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts" (OuterVolumeSpecName: "scripts") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.709008 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l" (OuterVolumeSpecName: "kube-api-access-pd45l") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "kube-api-access-pd45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.716729 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.719291 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (OuterVolumeSpecName: "glance") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "pvc-60154460-e4e5-447b-9d26-02e14a9d8490". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.728995 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data" (OuterVolumeSpecName: "config-data") pod "3a50b549-2eb5-4bfa-8f1d-3b862974ceed" (UID: "3a50b549-2eb5-4bfa-8f1d-3b862974ceed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.769001 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data" (OuterVolumeSpecName: "config-data") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773842 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773870 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f87ae24-e966-4385-8a84-cb66b14cd28b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773880 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773889 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd45l\" (UniqueName: \"kubernetes.io/projected/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-kube-api-access-pd45l\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc 
kubenswrapper[4829]: I0217 16:17:50.773918 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" " Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773929 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773938 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773945 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773954 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773961 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwxrq\" (UniqueName: \"kubernetes.io/projected/1f87ae24-e966-4385-8a84-cb66b14cd28b-kube-api-access-pwxrq\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.773970 4829 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a50b549-2eb5-4bfa-8f1d-3b862974ceed-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.774194 4829 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.783337 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f87ae24-e966-4385-8a84-cb66b14cd28b" (UID: "1f87ae24-e966-4385-8a84-cb66b14cd28b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.824406 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.824673 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490") on node "crc" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876371 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876788 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:50 crc kubenswrapper[4829]: I0217 16:17:50.876804 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1f87ae24-e966-4385-8a84-cb66b14cd28b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.586418 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.586451 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7l7pb" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.646967 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.657047 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.677317 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.697596 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698331 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698366 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698432 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698452 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 
16:17:51.698471 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698488 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: E0217 16:17:51.698508 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698519 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698881 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab5e213-ae02-408f-98ef-9ed6ecf2a1db" containerName="init" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698945 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-log" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698960 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" containerName="keystone-bootstrap" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.698977 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" containerName="glance-httpd" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.705377 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.707807 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.708472 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.716165 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7l7pb"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.728412 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.770401 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.771988 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.776930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.776968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777450 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777790 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.777910 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.784593 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.816882 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.816996 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817086 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817136 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817163 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.817194 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: 
I0217 16:17:51.817469 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.920487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921158 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921324 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921348 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921373 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921417 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921437 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921466 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.921656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.922697 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.922996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.926278 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.926323 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.930752 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.933620 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.940473 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.943743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.944630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:51 crc kubenswrapper[4829]: I0217 16:17:51.993507 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " pod="openstack/glance-default-external-api-0" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.023884 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024035 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024052 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.024215 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" 
(UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.030695 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.172265 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.173337 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.173893 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.174245 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.175964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod 
\"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.178080 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"keystone-bootstrap-tpsml\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.313554 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f87ae24-e966-4385-8a84-cb66b14cd28b" path="/var/lib/kubelet/pods/1f87ae24-e966-4385-8a84-cb66b14cd28b/volumes" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.341296 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a50b549-2eb5-4bfa-8f1d-3b862974ceed" path="/var/lib/kubelet/pods/3a50b549-2eb5-4bfa-8f1d-3b862974ceed/volumes" Feb 17 16:17:52 crc kubenswrapper[4829]: I0217 16:17:52.399331 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.519007 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600449 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600487 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600524 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600728 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") pod \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\" (UID: \"bb920f32-c8e7-45d7-8c19-40ae485d7c2f\") " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.600922 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601274 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs" (OuterVolumeSpecName: "logs") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601513 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.601525 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.608257 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb" (OuterVolumeSpecName: "kube-api-access-t29jb") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "kube-api-access-t29jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.621732 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts" (OuterVolumeSpecName: "scripts") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.635397 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (OuterVolumeSpecName: "glance") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.639583 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.662803 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.667865 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data" (OuterVolumeSpecName: "config-data") pod "bb920f32-c8e7-45d7-8c19-40ae485d7c2f" (UID: "bb920f32-c8e7-45d7-8c19-40ae485d7c2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb920f32-c8e7-45d7-8c19-40ae485d7c2f","Type":"ContainerDied","Data":"f430054f71a01f11b604f3f8ded31a8473f6ca27f025c34b842bd52c7bf70ac6"} Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692652 4829 scope.go:117] "RemoveContainer" containerID="6f21c8542efceb0bfdd90c214eebef28fbcb045b304d3c433cb4d47a29e9a62f" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.692799 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704201 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704241 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t29jb\" (UniqueName: \"kubernetes.io/projected/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-kube-api-access-t29jb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704263 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704283 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704299 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb920f32-c8e7-45d7-8c19-40ae485d7c2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.704356 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" " Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.736547 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.736752 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537") on node "crc" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.776947 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.796495 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.808903 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.826944 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:59 crc kubenswrapper[4829]: E0217 16:17:59.827484 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827499 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log" Feb 17 16:17:59 crc kubenswrapper[4829]: E0217 16:17:59.827513 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827518 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827738 4829 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-httpd" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.827751 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" containerName="glance-log" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.828863 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.831804 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.831901 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.850007 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910654 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910758 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910854 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910938 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:17:59 crc kubenswrapper[4829]: I0217 16:17:59.910963 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013219 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013272 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013368 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013420 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013452 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013476 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013726 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.013888 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.016079 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.016339 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.018773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.019843 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.020630 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") 
pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.029347 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.032021 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.063109 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.167125 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.246977 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.247641 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:18:00 crc kubenswrapper[4829]: I0217 16:18:00.298195 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb920f32-c8e7-45d7-8c19-40ae485d7c2f" path="/var/lib/kubelet/pods/bb920f32-c8e7-45d7-8c19-40ae485d7c2f/volumes" Feb 17 16:18:03 crc kubenswrapper[4829]: I0217 16:18:03.758674 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.813081 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.813408 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lrq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xh926_openstack(7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:18:03 crc kubenswrapper[4829]: E0217 16:18:03.815352 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xh926" 
podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" Feb 17 16:18:03 crc kubenswrapper[4829]: I0217 16:18:03.914226 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014434 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.014910 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015010 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: 
\"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.015109 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.024191 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k" (OuterVolumeSpecName: "kube-api-access-xwc8k") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "kube-api-access-xwc8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.081324 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.083886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config" (OuterVolumeSpecName: "config") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.086670 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.107298 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117533 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") pod \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\" (UID: \"9b4eb784-8c4c-4875-ae8f-e8882eb9989f\") " Feb 17 16:18:04 crc kubenswrapper[4829]: W0217 16:18:04.117683 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9b4eb784-8c4c-4875-ae8f-e8882eb9989f/volumes/kubernetes.io~configmap/dns-svc Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.117716 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b4eb784-8c4c-4875-ae8f-e8882eb9989f" (UID: "9b4eb784-8c4c-4875-ae8f-e8882eb9989f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118170 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwc8k\" (UniqueName: \"kubernetes.io/projected/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-kube-api-access-xwc8k\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118192 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118201 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118211 4829 reconciler_common.go:293] "Volume detached for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118222 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.118230 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4eb784-8c4c-4875-ae8f-e8882eb9989f-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.760340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-k8994" event={"ID":"9b4eb784-8c4c-4875-ae8f-e8882eb9989f","Type":"ContainerDied","Data":"4f1a71803b633d03391de17f6f16604c5e107eae12d0b26db71e47dca08add20"} Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.760363 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-k8994" Feb 17 16:18:04 crc kubenswrapper[4829]: E0217 16:18:04.763502 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xh926" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.804217 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:18:04 crc kubenswrapper[4829]: I0217 16:18:04.817984 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-k8994"] Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.116556 4829 scope.go:117] "RemoveContainer" containerID="435f0a7cd9bb43d7842a9259334907bf810639b88f169bf8707a112cd5fa4886" Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.136463 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.136692 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js29x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n46p8_openstack(f3d9b56f-3f6b-4fb6-af65-8f2410f60e20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.138382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n46p8" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.248034 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-895cf5cf-k8994" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.256493 4829 scope.go:117] "RemoveContainer" containerID="111e996ca2ce932ab61d3f5441aca23e08cc8a61152535009597e1974fb114d2" Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.326030 4829 scope.go:117] "RemoveContainer" containerID="06b2aebf77c0658aaf0fba25fd9532c0a6fed7a28da37fccf69b1fab6c6db0bd" Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.627478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.762511 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:18:05 crc kubenswrapper[4829]: W0217 16:18:05.771413 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f146bc_ed08_462a_9c4a_f5641b460469.slice/crio-c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1 WatchSource:0}: Error finding container c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1: Status 404 returned error can't find the container with id c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1 Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.774272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90"} Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.780511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerStarted","Data":"7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96"} Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.785203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerStarted","Data":"b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f"} Feb 17 16:18:05 crc kubenswrapper[4829]: E0217 16:18:05.810764 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n46p8" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" Feb 17 16:18:05 crc kubenswrapper[4829]: I0217 16:18:05.811957 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-mgkjx" podStartSLOduration=2.690367483 podStartE2EDuration="36.811936728s" podCreationTimestamp="2026-02-17 16:17:29 
+0000 UTC" firstStartedPulling="2026-02-17 16:17:30.98032115 +0000 UTC m=+1363.397339118" lastFinishedPulling="2026-02-17 16:18:05.101890375 +0000 UTC m=+1397.518908363" observedRunningTime="2026-02-17 16:18:05.799327137 +0000 UTC m=+1398.216345125" watchObservedRunningTime="2026-02-17 16:18:05.811936728 +0000 UTC m=+1398.228954706" Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.295051 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" path="/var/lib/kubelet/pods/9b4eb784-8c4c-4875-ae8f-e8882eb9989f/volumes" Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.697417 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.816243 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501"} Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.816284 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1"} Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.818222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerStarted","Data":"0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2"} Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.821461 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" 
event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerStarted","Data":"3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6"} Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.843825 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8s649" podStartSLOduration=3.515843442 podStartE2EDuration="37.84380678s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.329216521 +0000 UTC m=+1363.746234499" lastFinishedPulling="2026-02-17 16:18:05.657179859 +0000 UTC m=+1398.074197837" observedRunningTime="2026-02-17 16:18:06.836954615 +0000 UTC m=+1399.253972593" watchObservedRunningTime="2026-02-17 16:18:06.84380678 +0000 UTC m=+1399.260824748" Feb 17 16:18:06 crc kubenswrapper[4829]: I0217 16:18:06.857552 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tpsml" podStartSLOduration=15.85753647 podStartE2EDuration="15.85753647s" podCreationTimestamp="2026-02-17 16:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:06.85346091 +0000 UTC m=+1399.270478888" watchObservedRunningTime="2026-02-17 16:18:06.85753647 +0000 UTC m=+1399.274554448" Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.834680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13"} Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.840043 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerStarted","Data":"53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5"} Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 
16:18:07.845083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b"} Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.845129 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"26df09ac78a076eb0f2fab2e97427288c9dbe4295d421971b90f039ccad0b50a"} Feb 17 16:18:07 crc kubenswrapper[4829]: I0217 16:18:07.868610 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.868573889 podStartE2EDuration="16.868573889s" podCreationTimestamp="2026-02-17 16:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:07.863814601 +0000 UTC m=+1400.280832589" watchObservedRunningTime="2026-02-17 16:18:07.868573889 +0000 UTC m=+1400.285591867" Feb 17 16:18:08 crc kubenswrapper[4829]: I0217 16:18:08.865389 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerStarted","Data":"40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9"} Feb 17 16:18:08 crc kubenswrapper[4829]: I0217 16:18:08.906763 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.906738181 podStartE2EDuration="9.906738181s" podCreationTimestamp="2026-02-17 16:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:08.898754646 +0000 UTC m=+1401.315772624" watchObservedRunningTime="2026-02-17 
16:18:08.906738181 +0000 UTC m=+1401.323756159" Feb 17 16:18:09 crc kubenswrapper[4829]: I0217 16:18:09.882785 4829 generic.go:334] "Generic (PLEG): container finished" podID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerID="3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6" exitCode=0 Feb 17 16:18:09 crc kubenswrapper[4829]: I0217 16:18:09.882943 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerDied","Data":"3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6"} Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.168857 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.168907 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.212235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.213064 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902655 4829 generic.go:334] "Generic (PLEG): container finished" podID="8ff4740d-5b36-4273-be02-50bec771e157" containerID="0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2" exitCode=0 Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerDied","Data":"0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2"} Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.902925 4829 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:10 crc kubenswrapper[4829]: I0217 16:18:10.903230 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:11 crc kubenswrapper[4829]: I0217 16:18:11.913886 4829 generic.go:334] "Generic (PLEG): container finished" podID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerID="1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115" exitCode=0 Feb 17 16:18:11 crc kubenswrapper[4829]: I0217 16:18:11.913952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerDied","Data":"1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115"} Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.032224 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.032280 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.091265 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.096878 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.929307 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:18:12 crc kubenswrapper[4829]: I0217 16:18:12.929339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.790138 4829 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.798175 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.804056 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869402 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869478 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869521 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869636 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869720 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869766 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") pod \"f8202be9-bbed-45eb-80af-de3018eb6ce2\" (UID: \"f8202be9-bbed-45eb-80af-de3018eb6ce2\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869802 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869822 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 
17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869893 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869921 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") pod \"8ff4740d-5b36-4273-be02-50bec771e157\" (UID: \"8ff4740d-5b36-4273-be02-50bec771e157\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869940 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.869995 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") pod \"acebba68-0142-4d4e-be34-e31a6ccb8722\" (UID: \"acebba68-0142-4d4e-be34-e31a6ccb8722\") " Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.876930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.880042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts" (OuterVolumeSpecName: "scripts") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.880858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h" (OuterVolumeSpecName: "kube-api-access-24h9h") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "kube-api-access-24h9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.881709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs" (OuterVolumeSpecName: "logs") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.881994 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br" (OuterVolumeSpecName: "kube-api-access-lj6br") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "kube-api-access-lj6br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.884307 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg" (OuterVolumeSpecName: "kube-api-access-vkjbg") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "kube-api-access-vkjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.885720 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts" (OuterVolumeSpecName: "scripts") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.897229 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.915090 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config" (OuterVolumeSpecName: "config") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.923908 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data" (OuterVolumeSpecName: "config-data") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.926417 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data" (OuterVolumeSpecName: "config-data") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.929140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8202be9-bbed-45eb-80af-de3018eb6ce2" (UID: "f8202be9-bbed-45eb-80af-de3018eb6ce2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.942799 4829 generic.go:334] "Generic (PLEG): container finished" podID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerID="7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96" exitCode=0 Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.942848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerDied","Data":"7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945002 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tpsml" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945011 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tpsml" event={"ID":"acebba68-0142-4d4e-be34-e31a6ccb8722","Type":"ContainerDied","Data":"b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.945052 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b964f677bdd3c029e3b92151f81d08bf775d4134833dad52c3242620cf64687f" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jrh5n" event={"ID":"f8202be9-bbed-45eb-80af-de3018eb6ce2","Type":"ContainerDied","Data":"7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946490 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb193b16f3184c91798dca7106e8099cdc118d454f70fee0e39704d5dfc4f83" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.946525 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jrh5n" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.949043 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ff4740d-5b36-4273-be02-50bec771e157" (UID: "8ff4740d-5b36-4273-be02-50bec771e157"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.955391 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acebba68-0142-4d4e-be34-e31a6ccb8722" (UID: "acebba68-0142-4d4e-be34-e31a6ccb8722"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.956215 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s649" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.964565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s649" event={"ID":"8ff4740d-5b36-4273-be02-50bec771e157","Type":"ContainerDied","Data":"d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365"} Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.964647 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b8c9e9d29cdf8e65094fc8b5fb89d84b97306be2a3ef92cb85b6ed9fc60365" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975123 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff4740d-5b36-4273-be02-50bec771e157-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975188 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975217 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975238 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkjbg\" (UniqueName: \"kubernetes.io/projected/8ff4740d-5b36-4273-be02-50bec771e157-kube-api-access-vkjbg\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975255 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975270 4829 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975285 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24h9h\" (UniqueName: \"kubernetes.io/projected/f8202be9-bbed-45eb-80af-de3018eb6ce2-kube-api-access-24h9h\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975301 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975318 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.975637 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj6br\" (UniqueName: \"kubernetes.io/projected/acebba68-0142-4d4e-be34-e31a6ccb8722-kube-api-access-lj6br\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976089 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ff4740d-5b36-4273-be02-50bec771e157-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976098 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976107 4829 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/acebba68-0142-4d4e-be34-e31a6ccb8722-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:13 crc kubenswrapper[4829]: I0217 16:18:13.976116 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8202be9-bbed-45eb-80af-de3018eb6ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.206434 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.206989 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207009 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207029 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207038 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207050 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207058 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207074 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 
16:18:14.207082 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: E0217 16:18:14.207098 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="init" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207109 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="init" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207416 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff4740d-5b36-4273-be02-50bec771e157" containerName="placement-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207440 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4eb784-8c4c-4875-ae8f-e8882eb9989f" containerName="dnsmasq-dns" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207462 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" containerName="keystone-bootstrap" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.207484 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" containerName="neutron-db-sync" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.209027 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.221365 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289506 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289622 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.289663 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.351996 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.354319 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.361693 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.361999 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.362109 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.362211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pfff2" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.366788 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390760 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390823 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390843 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390919 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390950 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: 
\"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390972 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.390991 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391008 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391052 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391080 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.391706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.393459 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.400714 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.400883 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.417550 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"dnsmasq-dns-5ccc5c4795-rnr9j\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493575 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " 
pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.493871 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.497212 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.497682 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.498927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.499851 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.513254 4829 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"neutron-b56799c5b-dmgjh\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.533081 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.688112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.970390 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"] Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.972437 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978144 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978352 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.978518 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zckpn" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979071 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979175 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 16:18:14 crc kubenswrapper[4829]: I0217 16:18:14.979272 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:18:15 crc 
kubenswrapper[4829]: I0217 16:18:15.003414 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003704 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003756 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003913 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 
16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003958 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.003995 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.006467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.021211 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.053645 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0"} Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.063263 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.066186 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.070861 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p9cb5" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071145 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071238 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071397 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.071439 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.082178 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109712 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109787 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109865 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109968 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.109990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110029 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110066 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110118 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110143 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110191 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: 
\"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110226 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.110273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.118908 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-internal-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.136435 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-combined-ca-bundle\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.137200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-scripts\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.167693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-config-data\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171116 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-public-tls-certs\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171749 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-fernet-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.171757 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-credential-keys\") pod \"keystone-868ff7b66c-lx7qv\" (UID: \"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.177663 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlj6s\" (UniqueName: \"kubernetes.io/projected/c2a8da85-ca3d-4368-8a34-4db948e7f6f3-kube-api-access-zlj6s\") pod \"keystone-868ff7b66c-lx7qv\" (UID: 
\"c2a8da85-ca3d-4368-8a34-4db948e7f6f3\") " pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227089 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227180 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227216 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227255 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " 
pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227306 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.227429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.230563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.232182 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.237445 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.239439 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.239665 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.240304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.250364 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.261213 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"placement-5c89899bcb-82htl\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.308167 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.335213 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.373735 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.375902 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.400354 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439224 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439264 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvkp\" (UniqueName: \"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439291 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439367 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439387 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.439432 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541196 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541492 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvkp\" (UniqueName: 
\"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541531 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.541549 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.543460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.543515 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.546499 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod 
\"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.548996 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/504197ea-58c2-445f-96a1-4b812028425d-logs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.549631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-scripts\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.553099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-internal-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.554283 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-public-tls-certs\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.554658 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-config-data\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc 
kubenswrapper[4829]: I0217 16:18:15.563263 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/504197ea-58c2-445f-96a1-4b812028425d-combined-ca-bundle\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.565748 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvkp\" (UniqueName: \"kubernetes.io/projected/504197ea-58c2-445f-96a1-4b812028425d-kube-api-access-shvkp\") pod \"placement-6b8b56fc4d-7pnvr\" (UID: \"504197ea-58c2-445f-96a1-4b812028425d\") " pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.628911 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648450 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.648883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") pod \"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\" (UID: 
\"79d3ed60-8c68-44ec-aaa1-806b5aec5df1\") " Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.663275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx" (OuterVolumeSpecName: "kube-api-access-tzhzx") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "kube-api-access-tzhzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.694162 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:15 crc kubenswrapper[4829]: W0217 16:18:15.697628 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5 WatchSource:0}: Error finding container b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5: Status 404 returned error can't find the container with id b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5 Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.697937 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.762189 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhzx\" (UniqueName: \"kubernetes.io/projected/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-kube-api-access-tzhzx\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.791768 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:15 crc kubenswrapper[4829]: E0217 16:18:15.794605 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.794638 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.795032 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" containerName="heat-db-sync" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.797333 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.830644 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.864926 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.864962 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865086 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865123 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.865744 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.912009 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data" (OuterVolumeSpecName: "config-data") pod "79d3ed60-8c68-44ec-aaa1-806b5aec5df1" (UID: "79d3ed60-8c68-44ec-aaa1-806b5aec5df1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.958795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c89899bcb-82htl"]
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.958958 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.959046 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968603 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.968684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.969553 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.971861 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d3ed60-8c68-44ec-aaa1-806b5aec5df1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974475 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.974910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.976038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:15 crc kubenswrapper[4829]: I0217 16:18:15.994815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"neutron-59566c7c9b-gpfcg\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.132816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgkjx" event={"ID":"79d3ed60-8c68-44ec-aaa1-806b5aec5df1","Type":"ContainerDied","Data":"0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958"}
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.133017 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ccbcb8853908fa6fc0b24f8ec4ab6546cf025168c056849c031ac8010ed9958"
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.133072 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgkjx"
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.156643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5"}
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.157103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-868ff7b66c-lx7qv"]
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.189850 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192342 4829 generic.go:334] "Generic (PLEG): container finished" podID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerID="496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623" exitCode=0
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192393 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623"}
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.192417 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerStarted","Data":"38d0e25b8babc9cbba47e39ba8aa5d5221b3d6a4b4fa42411be271008d0092b7"}
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.205682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33"}
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.376800 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:18:16 crc kubenswrapper[4829]: I0217 16:18:16.377032 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8b56fc4d-7pnvr"]
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.116784 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"]
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.251565 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.267549 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerStarted","Data":"ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.270388 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.312947 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.315082 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868ff7b66c-lx7qv" event={"ID":"c2a8da85-ca3d-4368-8a34-4db948e7f6f3","Type":"ContainerStarted","Data":"293cf971e77cfa7e607294baa6a2d1b813e217e1034d8b25d770660e55413394"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.331225 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.332884 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"99e7419feafe64980110b2189931ffa931f5a97e2e78bd4c9d2b0c71000b41c8"}
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.336309 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"]
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.342053 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" podStartSLOduration=3.342014305 podStartE2EDuration="3.342014305s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:17.296426334 +0000 UTC m=+1409.713444312" watchObservedRunningTime="2026-02-17 16:18:17.342014305 +0000 UTC m=+1409.759032293"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.402875 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"]
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.405680 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.414125 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.414561 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.421383 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"]
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.551737 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.552259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.552564 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.555956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.555994 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.556073 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.556116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658231 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658448 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658519 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.658608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.664219 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-httpd-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.664318 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-combined-ca-bundle\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.674659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-public-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.674659 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-ovndb-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.675731 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-internal-tls-certs\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.680457 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/298e03dd-93bc-4a68-8589-ecec2278efd5-config\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.681951 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqwf\" (UniqueName: \"kubernetes.io/projected/298e03dd-93bc-4a68-8589-ecec2278efd5-kube-api-access-7vqwf\") pod \"neutron-5598cc6dcc-p2b29\" (UID: \"298e03dd-93bc-4a68-8589-ecec2278efd5\") " pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:17 crc kubenswrapper[4829]: I0217 16:18:17.906230 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.368261 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-868ff7b66c-lx7qv" event={"ID":"c2a8da85-ca3d-4368-8a34-4db948e7f6f3","Type":"ContainerStarted","Data":"8096b48936ccfe75f025d4625655ea441fda4c4d7d6cc2afe71cf8d7df1d1f16"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.371695 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-868ff7b66c-lx7qv"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.383648 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerStarted","Data":"039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.383797 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" containerID="cri-o://92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" gracePeriod=30
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.384032 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b56799c5b-dmgjh"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.384066 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" containerID="cri-o://039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" gracePeriod=30
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.403726 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"3964b3018b66ff82b3ca2cedd3b20a2a9b4c48bf635ff2c298427c883ec8e0fd"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.403770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8b56fc4d-7pnvr" event={"ID":"504197ea-58c2-445f-96a1-4b812028425d","Type":"ContainerStarted","Data":"c1ab826ad101ffe475ca27f698998fe44a0abc2c600d5408ea2efc5987d8ecc6"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.404933 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8b56fc4d-7pnvr"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.404957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8b56fc4d-7pnvr"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.417734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.417773 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerStarted","Data":"894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.418542 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59566c7c9b-gpfcg"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.426568 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerStarted","Data":"03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c"}
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.465620 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59566c7c9b-gpfcg" podStartSLOduration=3.465603964 podStartE2EDuration="3.465603964s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.461150344 +0000 UTC m=+1410.878168322" watchObservedRunningTime="2026-02-17 16:18:18.465603964 +0000 UTC m=+1410.882621932"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.519956 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c89899bcb-82htl" podStartSLOduration=3.5199380209999998 podStartE2EDuration="3.519938021s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.518289487 +0000 UTC m=+1410.935307465" watchObservedRunningTime="2026-02-17 16:18:18.519938021 +0000 UTC m=+1410.936955989"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.557800 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-868ff7b66c-lx7qv" podStartSLOduration=4.557775813 podStartE2EDuration="4.557775813s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.538693387 +0000 UTC m=+1410.955711365" watchObservedRunningTime="2026-02-17 16:18:18.557775813 +0000 UTC m=+1410.974793791"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.610704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5598cc6dcc-p2b29"]
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.615204 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b56799c5b-dmgjh" podStartSLOduration=4.615186293 podStartE2EDuration="4.615186293s" podCreationTimestamp="2026-02-17 16:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.568980155 +0000 UTC m=+1410.985998133" watchObservedRunningTime="2026-02-17 16:18:18.615186293 +0000 UTC m=+1411.032204271"
Feb 17 16:18:18 crc kubenswrapper[4829]: I0217 16:18:18.641637 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b8b56fc4d-7pnvr" podStartSLOduration=3.641555705 podStartE2EDuration="3.641555705s" podCreationTimestamp="2026-02-17 16:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.58879282 +0000 UTC m=+1411.005810798" watchObservedRunningTime="2026-02-17 16:18:18.641555705 +0000 UTC m=+1411.058573683"
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.454396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerStarted","Data":"b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc"}
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.458626 4829 generic.go:334] "Generic (PLEG): container finished" podID="75783ffe-a672-4585-ae18-3c162d659ee7" containerID="039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" exitCode=0
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.458691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef"}
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.471816 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"6c93a3a441ec63ea8f746c6d191f2df358ac22c0b4d899fccc8037364ad61f88"}
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472074 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"4f68087d01fd3239a42bef0a703c07fabdfa9de4a1539117eb8d4c29d0d0c066"}
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472088 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5598cc6dcc-p2b29" event={"ID":"298e03dd-93bc-4a68-8589-ecec2278efd5","Type":"ContainerStarted","Data":"790783a5b1b8d3209886a56ceddaa256888f2baf4b645b85a1d169eec7f9c40d"}
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472971 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c89899bcb-82htl"
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.472997 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c89899bcb-82htl"
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.474271 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xh926" podStartSLOduration=3.038650875 podStartE2EDuration="50.474254798s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.33884914 +0000 UTC m=+1363.755867118" lastFinishedPulling="2026-02-17 16:18:18.774453063 +0000 UTC m=+1411.191471041" observedRunningTime="2026-02-17 16:18:19.467023173 +0000 UTC m=+1411.884041151" watchObservedRunningTime="2026-02-17 16:18:19.474254798 +0000 UTC m=+1411.891272776"
Feb 17 16:18:19 crc kubenswrapper[4829]: I0217 16:18:19.495205 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5598cc6dcc-p2b29" podStartSLOduration=2.495183933 podStartE2EDuration="2.495183933s" podCreationTimestamp="2026-02-17 16:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:19.48949539 +0000 UTC m=+1411.906513368" watchObservedRunningTime="2026-02-17 16:18:19.495183933 +0000 UTC m=+1411.912201911"
Feb 17 16:18:20 crc kubenswrapper[4829]: I0217 16:18:20.487572 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5598cc6dcc-p2b29"
Feb 17 16:18:21 crc kubenswrapper[4829]: I0217 16:18:21.498640 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerStarted","Data":"e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6"}
Feb 17 16:18:21 crc kubenswrapper[4829]: I0217 16:18:21.517589 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n46p8" podStartSLOduration=3.809259693 podStartE2EDuration="52.51755162s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.385716826 +0000 UTC m=+1363.802734804" lastFinishedPulling="2026-02-17 16:18:20.094008753 +0000 UTC m=+1412.511026731" observedRunningTime="2026-02-17 16:18:21.516142452 +0000 UTC m=+1413.933160440" watchObservedRunningTime="2026-02-17 16:18:21.51755162 +0000 UTC m=+1413.934569608"
Feb 17 16:18:22 crc kubenswrapper[4829]: I0217 16:18:22.531473 4829 generic.go:334] "Generic (PLEG): container finished" podID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerID="b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc" exitCode=0
Feb 17 16:18:22 crc kubenswrapper[4829]: I0217 16:18:22.531583 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerDied","Data":"b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc"}
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.351349 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xh926"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421352 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") "
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") "
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.421626 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") pod \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\" (UID: \"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e\") "
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.428999 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7" (OuterVolumeSpecName: "kube-api-access-8lrq7") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "kube-api-access-8lrq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.448553 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.465360 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" (UID: "7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523829 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523865 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.523874 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lrq7\" (UniqueName: \"kubernetes.io/projected/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e-kube-api-access-8lrq7\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.534469 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552331 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xh926" event={"ID":"7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e","Type":"ContainerDied","Data":"c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8"}
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552368 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cb2064650d57eadb391ddc32b0fcab3cecb6461143054a112467689fa1e4f8"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.552380 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xh926"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.628585 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"]
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.628863 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" containerID="cri-o://4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485" gracePeriod=10
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.822185 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"]
Feb 17 16:18:24 crc kubenswrapper[4829]: E0217 16:18:24.822914 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.822935 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.823146 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" containerName="barbican-db-sync"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.824380 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.826789 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-68q4f"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.827112 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.830483 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.854345 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"]
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.856350 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.862989 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.894985 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"]
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944413 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944508 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944610 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944636 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944679 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944703 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"
Feb 17 16:18:24 crc
kubenswrapper[4829]: I0217 16:18:24.944760 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944806 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.944830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.945050 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"] Feb 17 16:18:24 crc kubenswrapper[4829]: I0217 16:18:24.992261 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:24 crc 
kubenswrapper[4829]: I0217 16:18:24.994367 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.015113 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048372 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048399 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048414 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048435 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048561 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048836 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048863 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048909 
4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048939 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.048982 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 
17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049035 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.049053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.073360 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87043d23-60bf-443c-8db4-2679d7269f6c-logs\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.084623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.086418 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.089382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f483139-9fb6-4db6-8c40-846d8bd69556-logs\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.100218 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-combined-ca-bundle\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.100485 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.103123 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxvfj\" (UniqueName: \"kubernetes.io/projected/5f483139-9fb6-4db6-8c40-846d8bd69556-kube-api-access-lxvfj\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.112646 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.117391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rft\" (UniqueName: \"kubernetes.io/projected/87043d23-60bf-443c-8db4-2679d7269f6c-kube-api-access-h6rft\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " 
pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.119422 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data-custom\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.123655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.124143 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-combined-ca-bundle\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.127802 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f483139-9fb6-4db6-8c40-846d8bd69556-config-data\") pod \"barbican-keystone-listener-55b9b6dfd6-gq6hn\" (UID: \"5f483139-9fb6-4db6-8c40-846d8bd69556\") " pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.129392 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87043d23-60bf-443c-8db4-2679d7269f6c-config-data-custom\") pod \"barbican-worker-765797c7c9-2cts6\" (UID: \"87043d23-60bf-443c-8db4-2679d7269f6c\") " 
pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158240 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158332 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158375 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158668 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " 
pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158704 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158743 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158815 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " 
pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.158858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.159463 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.163065 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.171182 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.171515 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc 
kubenswrapper[4829]: I0217 16:18:25.171529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.184115 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-765797c7c9-2cts6" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.194711 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.197038 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"dnsmasq-dns-688c87cc99-f5k27\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265121 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265429 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265450 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.265558 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.273764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.277056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.277481 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.281165 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.304152 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"barbican-api-5cb4f96fd4-bmlr5\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.335227 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.357817 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.467269 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.566144 4829 generic.go:334] "Generic (PLEG): container finished" podID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerID="4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485" exitCode=0 Feb 17 16:18:25 crc kubenswrapper[4829]: I0217 16:18:25.566199 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485"} Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.186849 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293438 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293481 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293551 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293654 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.293728 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") pod \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\" (UID: \"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb\") " Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.312277 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p" (OuterVolumeSpecName: "kube-api-access-rg66p") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "kube-api-access-rg66p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.398485 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.405904 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg66p\" (UniqueName: \"kubernetes.io/projected/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-kube-api-access-rg66p\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.411127 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.415433 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.420595 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.424834 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.437753 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config" (OuterVolumeSpecName: "config") pod "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" (UID: "52f82bf7-41c8-4c20-a149-83fbbc2d3bfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517210 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517469 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517478 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.517487 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.553249 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" 
event={"ID":"52f82bf7-41c8-4c20-a149-83fbbc2d3bfb","Type":"ContainerDied","Data":"de029d86f193dd1c04a644dfbce66d4d5a98f68124c1549de6eaa99d3eb1caa6"} Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580063 4829 scope.go:117] "RemoveContainer" containerID="4343738b8411a46e31351c7fa7f2a56b9dd16712a92092fb526ad177c7123485" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.580196 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-kjjvn" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.583162 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerStarted","Data":"62bf9e0fd2a55d71204acfd621962b635d4b2d6d5394b119cd1c1782a276bc21"} Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerStarted","Data":"bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816"} Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587558 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" containerID="cri-o://9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" gracePeriod=30 Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.587838 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588102 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" containerID="cri-o://bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" gracePeriod=30 Feb 17 
16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588149 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" containerID="cri-o://2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" gracePeriod=30 Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.588184 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" containerID="cri-o://4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" gracePeriod=30 Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.628083 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.39316978 podStartE2EDuration="57.628066002s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="2026-02-17 16:17:31.840323232 +0000 UTC m=+1364.257341210" lastFinishedPulling="2026-02-17 16:18:26.075219454 +0000 UTC m=+1418.492237432" observedRunningTime="2026-02-17 16:18:26.606955142 +0000 UTC m=+1419.023973120" watchObservedRunningTime="2026-02-17 16:18:26.628066002 +0000 UTC m=+1419.045083980" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.633040 4829 scope.go:117] "RemoveContainer" containerID="1a8920e9d77dd167c9af1a97ad397e1247c02a3dd5e84362fb2e9905e9b36a06" Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.688276 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:18:26 crc kubenswrapper[4829]: I0217 16:18:26.716756 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-kjjvn"] Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.015037 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn"] Feb 17 
16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.034539 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.045149 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765797c7c9-2cts6"] Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.601880 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"544293b5af95509fff3676402e367e7f68e9f514d3e3ad411d8004de6b4de9e6"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615525 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" exitCode=2 Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615554 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" exitCode=0 Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615561 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" exitCode=0 Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615635 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615662 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.615680 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.623189 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"e7113f27d1b432f6c47123480b460d226a2414586cf047a6acf509c9bb1d2e5e"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630333 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630346 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerStarted","Data":"550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630598 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.630640 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.638115 4829 generic.go:334] "Generic (PLEG): container finished" podID="1665c777-7859-4f39-a063-275485b6321c" containerID="a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0" exitCode=0 Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.638159 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0"} Feb 17 16:18:27 crc kubenswrapper[4829]: I0217 16:18:27.654681 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podStartSLOduration=2.654660891 podStartE2EDuration="2.654660891s" podCreationTimestamp="2026-02-17 16:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:27.653975052 +0000 UTC m=+1420.070993060" watchObservedRunningTime="2026-02-17 16:18:27.654660891 +0000 UTC m=+1420.071678879" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.261123 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"] Feb 17 16:18:28 crc kubenswrapper[4829]: E0217 16:18:28.262145 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.262169 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" Feb 17 16:18:28 crc kubenswrapper[4829]: E0217 16:18:28.262225 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="init" Feb 17 16:18:28 crc kubenswrapper[4829]: 
I0217 16:18:28.262235 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="init" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.262546 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" containerName="dnsmasq-dns" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.264275 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.266997 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.267288 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.303421 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f82bf7-41c8-4c20-a149-83fbbc2d3bfb" path="/var/lib/kubelet/pods/52f82bf7-41c8-4c20-a149-83fbbc2d3bfb/volumes" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.304074 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"] Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376724 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376784 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod 
\"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376914 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.376961 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.377018 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.377070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479300 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.479945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480095 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/652438ae-668e-4017-a88c-c6737fd0db78-logs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\") 
pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.480504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.481063 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.485006 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-combined-ca-bundle\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.485044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data-custom\") pod \"barbican-api-744588c6bd-fsx8x\" 
(UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.486391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-internal-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.493078 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-public-tls-certs\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.495785 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2b5m\" (UniqueName: \"kubernetes.io/projected/652438ae-668e-4017-a88c-c6737fd0db78-kube-api-access-b2b5m\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.497382 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/652438ae-668e-4017-a88c-c6737fd0db78-config-data\") pod \"barbican-api-744588c6bd-fsx8x\" (UID: \"652438ae-668e-4017-a88c-c6737fd0db78\") " pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.625105 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.651072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerStarted","Data":"faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173"} Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.651211 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.653855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerDied","Data":"e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6"} Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.654268 4829 generic.go:334] "Generic (PLEG): container finished" podID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerID="e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6" exitCode=0 Feb 17 16:18:28 crc kubenswrapper[4829]: I0217 16:18:28.679887 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" podStartSLOduration=4.679867822 podStartE2EDuration="4.679867822s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:28.669283807 +0000 UTC m=+1421.086301785" watchObservedRunningTime="2026-02-17 16:18:28.679867822 +0000 UTC m=+1421.096885790" Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.572285 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-744588c6bd-fsx8x"] Feb 17 16:18:29 crc kubenswrapper[4829]: W0217 16:18:29.587336 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod652438ae_668e_4017_a88c_c6737fd0db78.slice/crio-e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1 WatchSource:0}: Error finding container e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1: Status 404 returned error can't find the container with id e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1 Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.669965 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"eb3b40e87ffac66715998434cf10dc5fc9dcbf85032c3f8e07aef7c8d4a2a0b6"} Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.674099 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"82ebfe753beefc9f7891ec2ff2758c732af241abd532751ccfedd636aa50a2f0"} Feb 17 16:18:29 crc kubenswrapper[4829]: I0217 16:18:29.676294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"e66240a88687f3be7c8f203ceceeb43f8fa140dd44504f6892c675f92f9f16c1"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.039866 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125545 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125638 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125704 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125880 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.125971 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.126052 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") pod \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\" (UID: \"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20\") " Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.131675 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.135992 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts" (OuterVolumeSpecName: "scripts") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.136083 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.136089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x" (OuterVolumeSpecName: "kube-api-access-js29x") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "kube-api-access-js29x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.173353 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.213693 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data" (OuterVolumeSpecName: "config-data") pod "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" (UID: "f3d9b56f-3f6b-4fb6-af65-8f2410f60e20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228218 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228251 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228263 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js29x\" (UniqueName: \"kubernetes.io/projected/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-kube-api-access-js29x\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228271 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228279 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.228289 4829 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.690301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" event={"ID":"5f483139-9fb6-4db6-8c40-846d8bd69556","Type":"ContainerStarted","Data":"c2cc487209d11dd5958d6dcb029007ec83eaf2645cbae4205326dabe14bcc186"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"2e961bc610251c1ba1fa6161ac0bdfac9cfdd30ee02b2dd2de841f591598872c"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693613 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693633 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.693643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-744588c6bd-fsx8x" event={"ID":"652438ae-668e-4017-a88c-c6737fd0db78","Type":"ContainerStarted","Data":"af70644eb88d7fe0e69e15f4389b7136078e0535542f662edd9ae2d09fbfb118"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.696926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-765797c7c9-2cts6" event={"ID":"87043d23-60bf-443c-8db4-2679d7269f6c","Type":"ContainerStarted","Data":"4639261727b0d8cf3bc0404bc0629163a34a5a1de1a0b8aacb6866651c8d1fbc"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699180 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n46p8" event={"ID":"f3d9b56f-3f6b-4fb6-af65-8f2410f60e20","Type":"ContainerDied","Data":"8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b"} Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699223 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf69fea4f9234293be27d594f89648e53ae3bfd3372517552a2706b42fc667b" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.699223 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n46p8" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.739896 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-55b9b6dfd6-gq6hn" podStartSLOduration=4.666176773 podStartE2EDuration="6.739873724s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="2026-02-17 16:18:27.022107001 +0000 UTC m=+1419.439124979" lastFinishedPulling="2026-02-17 16:18:29.095803962 +0000 UTC m=+1421.512821930" observedRunningTime="2026-02-17 16:18:30.725020944 +0000 UTC m=+1423.142038932" watchObservedRunningTime="2026-02-17 16:18:30.739873724 +0000 UTC m=+1423.156891712" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.772669 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-744588c6bd-fsx8x" podStartSLOduration=2.77081679 podStartE2EDuration="2.77081679s" podCreationTimestamp="2026-02-17 16:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
16:18:30.749845935 +0000 UTC m=+1423.166863913" watchObservedRunningTime="2026-02-17 16:18:30.77081679 +0000 UTC m=+1423.187834768" Feb 17 16:18:30 crc kubenswrapper[4829]: I0217 16:18:30.784237 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-765797c7c9-2cts6" podStartSLOduration=4.707947841 podStartE2EDuration="6.784220562s" podCreationTimestamp="2026-02-17 16:18:24 +0000 UTC" firstStartedPulling="2026-02-17 16:18:27.029535681 +0000 UTC m=+1419.446553659" lastFinishedPulling="2026-02-17 16:18:29.105808402 +0000 UTC m=+1421.522826380" observedRunningTime="2026-02-17 16:18:30.776190975 +0000 UTC m=+1423.193208953" watchObservedRunningTime="2026-02-17 16:18:30.784220562 +0000 UTC m=+1423.201238540" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.019247 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: E0217 16:18:31.019794 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.019813 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.020039 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" containerName="cinder-db-sync" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.026979 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.029115 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8kvfc" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.031005 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.033950 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.034268 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.038561 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049178 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049377 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049394 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.049421 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.099102 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.099310 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns" containerID="cri-o://faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173" gracePeriod=10 Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.139400 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.141978 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155727 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155824 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155867 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155885 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155903 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155932 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155951 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.155978 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156042 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " 
pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156055 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.156844 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.157305 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.167932 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.168692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " 
pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.168769 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.171119 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.200210 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"cinder-scheduler-0\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.277854 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280203 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280399 
4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280608 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280703 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.280779 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.281746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.295816 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.296760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.297434 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.298211 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.335545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"dnsmasq-dns-6bb4fc677f-5skss\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.376094 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.497412 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.514068 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.520211 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.533531 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.601632 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.711862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.711909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc 
kubenswrapper[4829]: I0217 16:18:31.712255 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712286 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712406 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.712490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.769181 4829 generic.go:334] "Generic (PLEG): container finished" podID="1665c777-7859-4f39-a063-275485b6321c" containerID="faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173" exitCode=0 Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.770307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" 
event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173"} Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.814944 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815249 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815274 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815401 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815447 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815469 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815534 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.815631 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.816322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.824556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.833449 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " 
pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.833696 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.834170 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.841981 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"cinder-api-0\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " pod="openstack/cinder-api-0" Feb 17 16:18:31 crc kubenswrapper[4829]: I0217 16:18:31.846555 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.019830 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027513 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027561 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027625 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027709 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027834 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.027911 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v2m6\" (UniqueName: 
\"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") pod \"1665c777-7859-4f39-a063-275485b6321c\" (UID: \"1665c777-7859-4f39-a063-275485b6321c\") " Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.035730 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6" (OuterVolumeSpecName: "kube-api-access-2v2m6") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "kube-api-access-2v2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.112428 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.131052 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.131994 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v2m6\" (UniqueName: \"kubernetes.io/projected/1665c777-7859-4f39-a063-275485b6321c-kube-api-access-2v2m6\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.132113 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.132222 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.151168 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.173106 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config" (OuterVolumeSpecName: "config") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.179134 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1665c777-7859-4f39-a063-275485b6321c" (UID: "1665c777-7859-4f39-a063-275485b6321c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.221274 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233807 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233837 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.233848 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1665c777-7859-4f39-a063-275485b6321c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.424650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.629695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:18:32 crc kubenswrapper[4829]: W0217 16:18:32.633992 4829 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0 WatchSource:0}: Error finding container af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0: Status 404 returned error can't find the container with id af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0 Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.787230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14"} Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.789681 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerStarted","Data":"25c76158cbbd089e89beb231349a135df7ab735e2a004c66b802c8527397a342"} Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794176 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" event={"ID":"1665c777-7859-4f39-a063-275485b6321c","Type":"ContainerDied","Data":"62bf9e0fd2a55d71204acfd621962b635d4b2d6d5394b119cd1c1782a276bc21"} Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794258 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-f5k27" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.794411 4829 scope.go:117] "RemoveContainer" containerID="faea73a2be30095695c47040cd4b56aa7a4c4d8b9d01c75acd18d699b71fc173" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.799147 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0"} Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.841618 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.859032 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-f5k27"] Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.878900 4829 scope.go:117] "RemoveContainer" containerID="a3b874a62b960074941b27e92bd34f265f499b4399e91be9dd72d60b2f13a9a0" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.947830 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:18:32 crc kubenswrapper[4829]: E0217 16:18:32.948380 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="init" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948403 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="init" Feb 17 16:18:32 crc kubenswrapper[4829]: E0217 16:18:32.948428 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948437 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns" Feb 17 
16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.948754 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1665c777-7859-4f39-a063-275485b6321c" containerName="dnsmasq-dns" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.952374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:32 crc kubenswrapper[4829]: I0217 16:18:32.986642 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.051560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.154857 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.155336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.155680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.181117 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65prr\" (UniqueName: 
\"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"community-operators-jpmqj\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.281396 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.570650 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.816056 4829 generic.go:334] "Generic (PLEG): container finished" podID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022" exitCode=0 Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.816092 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"} Feb 17 16:18:33 crc kubenswrapper[4829]: I0217 16:18:33.923556 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.005659 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.045081 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:18:34 crc kubenswrapper[4829]: W0217 16:18:34.045642 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb993f64_fe54_4fed_9aca_68e11a71eee7.slice/crio-0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086 WatchSource:0}: 
Error finding container 0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086: Status 404 returned error can't find the container with id 0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086 Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.299143 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1665c777-7859-4f39-a063-275485b6321c" path="/var/lib/kubelet/pods/1665c777-7859-4f39-a063-275485b6321c/volumes" Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.857720 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1"} Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858287 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerStarted","Data":"7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499"} Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858411 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" containerID="cri-o://7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" gracePeriod=30 Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858664 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.858860 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" containerID="cri-o://98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" gracePeriod=30 Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869702 4829 generic.go:334] 
"Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7" exitCode=0 Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869878 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7"} Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.869926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086"} Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.882863 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"} Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.895062 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.89504526 podStartE2EDuration="3.89504526s" podCreationTimestamp="2026-02-17 16:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:34.880516888 +0000 UTC m=+1427.297534866" watchObservedRunningTime="2026-02-17 16:18:34.89504526 +0000 UTC m=+1427.312063238" Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.898920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerStarted","Data":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"} Feb 17 
16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.899036 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:34 crc kubenswrapper[4829]: I0217 16:18:34.930847 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" podStartSLOduration=3.930829246 podStartE2EDuration="3.930829246s" podCreationTimestamp="2026-02-17 16:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:34.922081309 +0000 UTC m=+1427.339099287" watchObservedRunningTime="2026-02-17 16:18:34.930829246 +0000 UTC m=+1427.347847224" Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.927252 4829 generic.go:334] "Generic (PLEG): container finished" podID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerID="7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" exitCode=143 Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.927520 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499"} Feb 17 16:18:35 crc kubenswrapper[4829]: I0217 16:18:35.958703 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerStarted","Data":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"} Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.376744 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.733186 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.439347586 
podStartE2EDuration="6.733169271s" podCreationTimestamp="2026-02-17 16:18:30 +0000 UTC" firstStartedPulling="2026-02-17 16:18:32.21777029 +0000 UTC m=+1424.634788268" lastFinishedPulling="2026-02-17 16:18:33.511591975 +0000 UTC m=+1425.928609953" observedRunningTime="2026-02-17 16:18:36.009903752 +0000 UTC m=+1428.426921730" watchObservedRunningTime="2026-02-17 16:18:36.733169271 +0000 UTC m=+1429.150187249" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.738813 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.742842 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.764204 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859437 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.859521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2c7\" (UniqueName: 
\"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961546 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.961734 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.962521 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.962693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.978391 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357"} Feb 17 16:18:36 crc kubenswrapper[4829]: I0217 16:18:36.988680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"redhat-marketplace-g92l5\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.101303 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.730187 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.733759 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.753361 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782217 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782461 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.782511 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885039 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885081 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.885713 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.886310 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.894347 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.905378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"redhat-operators-74rcl\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " pod="openshift-marketplace/redhat-operators-74rcl" 
Feb 17 16:18:37 crc kubenswrapper[4829]: W0217 16:18:37.915170 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd8f257_bfbb_4393_b0b3_f1c955a73e05.slice/crio-8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61 WatchSource:0}: Error finding container 8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61: Status 404 returned error can't find the container with id 8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61 Feb 17 16:18:37 crc kubenswrapper[4829]: I0217 16:18:37.999805 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerStarted","Data":"8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61"} Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.013395 4829 generic.go:334] "Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357" exitCode=0 Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.013475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357"} Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.103420 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.659645 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:38 crc kubenswrapper[4829]: I0217 16:18:38.785985 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.038463 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e" exitCode=0 Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.038554 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.051052 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.051096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6"} Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.055663 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerStarted","Data":"c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6"} Feb 17 16:18:39 crc 
kubenswrapper[4829]: I0217 16:18:39.141151 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jpmqj" podStartSLOduration=3.534116666 podStartE2EDuration="7.14113322s" podCreationTimestamp="2026-02-17 16:18:32 +0000 UTC" firstStartedPulling="2026-02-17 16:18:34.873759815 +0000 UTC m=+1427.290777793" lastFinishedPulling="2026-02-17 16:18:38.480776369 +0000 UTC m=+1430.897794347" observedRunningTime="2026-02-17 16:18:39.108271392 +0000 UTC m=+1431.525289370" watchObservedRunningTime="2026-02-17 16:18:39.14113322 +0000 UTC m=+1431.558151198" Feb 17 16:18:39 crc kubenswrapper[4829]: I0217 16:18:39.158975 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:40 crc kubenswrapper[4829]: I0217 16:18:40.073470 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" exitCode=0 Feb 17 16:18:40 crc kubenswrapper[4829]: I0217 16:18:40.076993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.086521 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c" exitCode=0 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.086586 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c"} Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 
16:18:41.134392 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.180473 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-744588c6bd-fsx8x" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.255890 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.256136 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" containerID="cri-o://59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" gracePeriod=30 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.256298 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" containerID="cri-o://5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" gracePeriod=30 Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.262695 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": EOF" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.499778 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.558119 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:41 crc kubenswrapper[4829]: I0217 16:18:41.558370 4829 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" containerID="cri-o://ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" gracePeriod=10 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.034527 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.097509 4829 generic.go:334] "Generic (PLEG): container finished" podID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerID="ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" exitCode=0 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.097604 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba"} Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.101447 4829 generic.go:334] "Generic (PLEG): container finished" podID="6f8d0651-0829-4225-b98a-ffb3453058db" containerID="59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" exitCode=143 Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.101750 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91"} Feb 17 16:18:42 crc kubenswrapper[4829]: I0217 16:18:42.931647 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046142 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046203 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046246 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046320 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046398 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.046469 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") pod \"d9d1bf31-65a7-4292-b06e-4f862ba023da\" (UID: \"d9d1bf31-65a7-4292-b06e-4f862ba023da\") " Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.053362 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp" (OuterVolumeSpecName: "kube-api-access-5rfwp") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "kube-api-access-5rfwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.106085 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.123788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" event={"ID":"d9d1bf31-65a7-4292-b06e-4f862ba023da","Type":"ContainerDied","Data":"38d0e25b8babc9cbba47e39ba8aa5d5221b3d6a4b4fa42411be271008d0092b7"} Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.123864 4829 scope.go:117] "RemoveContainer" containerID="ab59b96df8b9c4b5fed19ab396ba8108a10f6a3270c35f6be353ea9030ffd2ba" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.124049 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-rnr9j" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.132445 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.141644 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.147051 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149521 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149566 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149597 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149611 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfwp\" (UniqueName: \"kubernetes.io/projected/d9d1bf31-65a7-4292-b06e-4f862ba023da-kube-api-access-5rfwp\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.149624 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.164243 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config" (OuterVolumeSpecName: "config") pod "d9d1bf31-65a7-4292-b06e-4f862ba023da" (UID: "d9d1bf31-65a7-4292-b06e-4f862ba023da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.251833 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d1bf31-65a7-4292-b06e-4f862ba023da-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.283027 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.283069 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.422191 4829 scope.go:117] "RemoveContainer" containerID="496d1fd72279208f2c820bbddfa7af79517ed24f869ee5180ffcd99ed7e5f623" Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.464410 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:43 crc kubenswrapper[4829]: I0217 16:18:43.480740 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-rnr9j"] Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.293419 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" path="/var/lib/kubelet/pods/d9d1bf31-65a7-4292-b06e-4f862ba023da/volumes" Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.341928 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:44 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:44 crc kubenswrapper[4829]: > Feb 17 16:18:44 crc kubenswrapper[4829]: I0217 16:18:44.689959 4829 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-b56799c5b-dmgjh" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.192:9696/\": dial tcp 10.217.0.192:9696: connect: connection refused" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.155511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerStarted","Data":"4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71"} Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.158349 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.177943 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g92l5" podStartSLOduration=4.368763597 podStartE2EDuration="9.177922161s" podCreationTimestamp="2026-02-17 16:18:36 +0000 UTC" firstStartedPulling="2026-02-17 16:18:39.043952415 +0000 UTC m=+1431.460970393" lastFinishedPulling="2026-02-17 16:18:43.853110979 +0000 UTC m=+1436.270128957" observedRunningTime="2026-02-17 16:18:45.173403479 +0000 UTC m=+1437.590421457" watchObservedRunningTime="2026-02-17 16:18:45.177922161 +0000 UTC m=+1437.594940149" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.877063 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:48766->10.217.0.201:9311: read: connection reset by peer" Feb 17 16:18:45 crc kubenswrapper[4829]: I0217 16:18:45.877092 4829 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:48752->10.217.0.201:9311: read: connection reset by peer" Feb 17 16:18:45 crc kubenswrapper[4829]: W0217 16:18:45.966395 4829 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f8d0651_0829_4225_b98a_ffb3453058db.slice/crio-550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c": error while statting cgroup v2: [read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f8d0651_0829_4225_b98a_ffb3453058db.slice/crio-550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c/pids.current: no such device], continuing to push stats Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.175425 4829 generic.go:334] "Generic (PLEG): container finished" podID="6f8d0651-0829-4225-b98a-ffb3453058db" containerID="5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" exitCode=0 Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.175722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34"} Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.177912 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" exitCode=0 Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.177979 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" 
event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.383198 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.433994 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.557896 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629635 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629752 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629843 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.629937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") pod 
\"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.630001 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") pod \"6f8d0651-0829-4225-b98a-ffb3453058db\" (UID: \"6f8d0651-0829-4225-b98a-ffb3453058db\") " Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.631260 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs" (OuterVolumeSpecName: "logs") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.645415 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.670045 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57" (OuterVolumeSpecName: "kube-api-access-llm57") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "kube-api-access-llm57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.699213 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.733734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data" (OuterVolumeSpecName: "config-data") pod "6f8d0651-0829-4225-b98a-ffb3453058db" (UID: "6f8d0651-0829-4225-b98a-ffb3453058db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736082 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736239 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736324 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8d0651-0829-4225-b98a-ffb3453058db-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736396 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llm57\" (UniqueName: \"kubernetes.io/projected/6f8d0651-0829-4225-b98a-ffb3453058db-kube-api-access-llm57\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.736477 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8d0651-0829-4225-b98a-ffb3453058db-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:46 crc kubenswrapper[4829]: I0217 16:18:46.757883 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.061758 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.101984 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.102048 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191781 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" event={"ID":"6f8d0651-0829-4225-b98a-ffb3453058db","Type":"ContainerDied","Data":"550df1a796e4c45c9c8a7458f908048052703b52bec5b20cec495a46e424531c"} Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191853 4829 scope.go:117] "RemoveContainer" containerID="5da1aee1082686cb967b55c427a0c77e9f11ca50180db040e27204c98b593f34" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.191859 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" 
containerID="cri-o://d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" gracePeriod=30 Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.192033 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb4f96fd4-bmlr5" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.193349 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" containerID="cri-o://52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" gracePeriod=30 Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.235526 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.236369 4829 scope.go:117] "RemoveContainer" containerID="59ce0222ace9494d94e34f1486bd381877db351fd775b362dabafad11a1dce91" Feb 17 16:18:47 crc kubenswrapper[4829]: I0217 16:18:47.248417 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cb4f96fd4-bmlr5"] Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.056865 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5598cc6dcc-p2b29" Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.185771 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.185968 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59566c7c9b-gpfcg" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" containerID="cri-o://894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" gracePeriod=30 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.186304 4829 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-59566c7c9b-gpfcg" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" containerID="cri-o://5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" gracePeriod=30 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.224807 4829 generic.go:334] "Generic (PLEG): container finished" podID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" exitCode=0 Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.224926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"} Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.372212 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" path="/var/lib/kubelet/pods/6f8d0651-0829-4225-b98a-ffb3453058db/volumes" Feb 17 16:18:48 crc kubenswrapper[4829]: I0217 16:18:48.598776 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:48 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:48 crc kubenswrapper[4829]: > Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340471 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340715 4829 generic.go:334] "Generic (PLEG): container finished" podID="75783ffe-a672-4585-ae18-3c162d659ee7" containerID="92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" exitCode=137 Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.340791 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.372373 4829 generic.go:334] "Generic (PLEG): container finished" podID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerID="5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" exitCode=0 Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.372667 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.422081 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerStarted","Data":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.467288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74rcl" podStartSLOduration=4.274159984 podStartE2EDuration="12.467266589s" podCreationTimestamp="2026-02-17 16:18:37 +0000 UTC" firstStartedPulling="2026-02-17 16:18:40.097149203 +0000 UTC m=+1432.514167181" lastFinishedPulling="2026-02-17 16:18:48.290255808 +0000 UTC m=+1440.707273786" observedRunningTime="2026-02-17 16:18:49.448951685 +0000 UTC m=+1441.865969663" watchObservedRunningTime="2026-02-17 16:18:49.467266589 +0000 UTC m=+1441.884284567" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.515981 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:49 crc 
kubenswrapper[4829]: I0217 16:18:49.516055 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637828 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637853 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637939 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.637992 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") pod \"75783ffe-a672-4585-ae18-3c162d659ee7\" (UID: \"75783ffe-a672-4585-ae18-3c162d659ee7\") " Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.650474 4829 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.651355 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh" (OuterVolumeSpecName: "kube-api-access-fdsqh") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "kube-api-access-fdsqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.747094 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.747377 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdsqh\" (UniqueName: \"kubernetes.io/projected/75783ffe-a672-4585-ae18-3c162d659ee7-kube-api-access-fdsqh\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.834936 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config" (OuterVolumeSpecName: "config") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.837132 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.849264 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.849307 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.902779 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "75783ffe-a672-4585-ae18-3c162d659ee7" (UID: "75783ffe-a672-4585-ae18-3c162d659ee7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.934126 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.942410 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:49 crc kubenswrapper[4829]: I0217 16:18:49.954835 4829 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75783ffe-a672-4585-ae18-3c162d659ee7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056592 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056735 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056797 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.056850 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") pod \"2407c845-36e5-40f1-ae75-2b6c5fc31624\" (UID: \"2407c845-36e5-40f1-ae75-2b6c5fc31624\") " Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.057885 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.062823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.063169 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts" (OuterVolumeSpecName: "scripts") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.063773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf" (OuterVolumeSpecName: "kube-api-access-zprpf") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "kube-api-access-zprpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.151835 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.161233 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8b56fc4d-7pnvr" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171192 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2407c845-36e5-40f1-ae75-2b6c5fc31624-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171217 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171229 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: 
I0217 16:18:50.171238 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprpf\" (UniqueName: \"kubernetes.io/projected/2407c845-36e5-40f1-ae75-2b6c5fc31624-kube-api-access-zprpf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.171247 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.233843 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data" (OuterVolumeSpecName: "config-data") pod "2407c845-36e5-40f1-ae75-2b6c5fc31624" (UID: "2407c845-36e5-40f1-ae75-2b6c5fc31624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.236361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.274139 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2407c845-36e5-40f1-ae75-2b6c5fc31624-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.310763 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.432213 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440802 4829 generic.go:334] "Generic (PLEG): container finished" podID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" exitCode=0 Feb 17 16:18:50 crc kubenswrapper[4829]: 
I0217 16:18:50.440883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440909 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2407c845-36e5-40f1-ae75-2b6c5fc31624","Type":"ContainerDied","Data":"da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.440928 4829 scope.go:117] "RemoveContainer" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.441078 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.444780 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.444980 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c89899bcb-82htl" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" containerID="cri-o://0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" gracePeriod=30 Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.445216 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b56799c5b-dmgjh" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.446009 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b56799c5b-dmgjh" event={"ID":"75783ffe-a672-4585-ae18-3c162d659ee7","Type":"ContainerDied","Data":"b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5"} Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.446798 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c89899bcb-82htl" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" containerID="cri-o://03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" gracePeriod=30 Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.632790 4829 scope.go:117] "RemoveContainer" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.653637 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.673883 4829 scope.go:117] "RemoveContainer" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.675056 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": container with ID starting with 52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6 not found: ID does not exist" containerID="52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675086 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6"} err="failed to get container status 
\"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": rpc error: code = NotFound desc = could not find container \"52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6\": container with ID starting with 52729e811bb91fc592b1240acaa3541fd75e0103ba5d4763d7c5234460ee1fa6 not found: ID does not exist" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675107 4829 scope.go:117] "RemoveContainer" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.675484 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": container with ID starting with d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045 not found: ID does not exist" containerID="d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675514 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045"} err="failed to get container status \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": rpc error: code = NotFound desc = could not find container \"d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045\": container with ID starting with d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045 not found: ID does not exist" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.675529 4829 scope.go:117] "RemoveContainer" containerID="039822dbf3bb46f9cc235cbf0f2e803e2a57b16d0e295844a9337ee2c54bdeef" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.676410 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b56799c5b-dmgjh"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.694383 4829 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.705356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719325 4829 scope.go:117] "RemoveContainer" containerID="92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719461 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719909 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719919 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719932 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719938 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719949 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719973 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="init" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.719978 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="init" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.719999 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720005 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720020 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720035 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: E0217 16:18:50.720046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720052 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720288 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api-log" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720309 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9d1bf31-65a7-4292-b06e-4f862ba023da" containerName="dnsmasq-dns" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720325 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-httpd" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720333 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8d0651-0829-4225-b98a-ffb3453058db" containerName="barbican-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720349 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" containerName="neutron-api" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720360 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="probe" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.720370 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" containerName="cinder-scheduler" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.723253 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.729058 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.729201 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733762 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733866 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.733979 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.734013 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.734038 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835607 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835886 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.835991 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.836010 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.836651 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0feacb21-5300-40f2-bee7-fac4613c2977-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.840237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.840927 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-scripts\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " 
pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.841166 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.842251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0feacb21-5300-40f2-bee7-fac4613c2977-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:50 crc kubenswrapper[4829]: I0217 16:18:50.859768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb64l\" (UniqueName: \"kubernetes.io/projected/0feacb21-5300-40f2-bee7-fac4613c2977-kube-api-access-xb64l\") pod \"cinder-scheduler-0\" (UID: \"0feacb21-5300-40f2-bee7-fac4613c2977\") " pod="openstack/cinder-scheduler-0" Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.040076 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.462092 4829 generic.go:334] "Generic (PLEG): container finished" podID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerID="0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" exitCode=143 Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.462469 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca"} Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.563029 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:18:51 crc kubenswrapper[4829]: I0217 16:18:51.912400 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.291019 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2407c845-36e5-40f1-ae75-2b6c5fc31624" path="/var/lib/kubelet/pods/2407c845-36e5-40f1-ae75-2b6c5fc31624/volumes" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.292642 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75783ffe-a672-4585-ae18-3c162d659ee7" path="/var/lib/kubelet/pods/75783ffe-a672-4585-ae18-3c162d659ee7/volumes" Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.509742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"28ac3de4c1a189d11613ed8d58c9c4b54a79c2bcb3247b57f94a9a0ff335382d"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.509784 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"69036bfd3fbb9296e310bf3a04b61aef294ebb90f30d53d6ab6e737f0c120606"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.536001 4829 generic.go:334] "Generic (PLEG): container finished" podID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerID="894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" exitCode=0 Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.536046 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6"} Feb 17 16:18:52 crc kubenswrapper[4829]: I0217 16:18:52.549118 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-868ff7b66c-lx7qv" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.232463 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328776 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328916 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.328962 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.329061 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.329119 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") pod \"d027908d-4d46-40f2-a1d9-a6353e1d17be\" (UID: \"d027908d-4d46-40f2-a1d9-a6353e1d17be\") " Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.344813 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.383192 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x" (OuterVolumeSpecName: "kube-api-access-r7x8x") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "kube-api-access-r7x8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.446807 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.446842 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7x8x\" (UniqueName: \"kubernetes.io/projected/d027908d-4d46-40f2-a1d9-a6353e1d17be-kube-api-access-r7x8x\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.465788 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config" (OuterVolumeSpecName: "config") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.502352 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.550067 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.550407 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577740 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59566c7c9b-gpfcg" event={"ID":"d027908d-4d46-40f2-a1d9-a6353e1d17be","Type":"ContainerDied","Data":"97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668"} Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577792 4829 scope.go:117] "RemoveContainer" containerID="5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.577937 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59566c7c9b-gpfcg" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.585287 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d027908d-4d46-40f2-a1d9-a6353e1d17be" (UID: "d027908d-4d46-40f2-a1d9-a6353e1d17be"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.602090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0feacb21-5300-40f2-bee7-fac4613c2977","Type":"ContainerStarted","Data":"2174bb841778409a7defc29514cec46ed8eaee6c9fd6801785291f62b2a0736b"} Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.652265 4829 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d027908d-4d46-40f2-a1d9-a6353e1d17be-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.694069 4829 scope.go:117] "RemoveContainer" containerID="894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.913172 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.913154264 podStartE2EDuration="3.913154264s" podCreationTimestamp="2026-02-17 16:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:53.625336482 +0000 UTC m=+1446.042354460" watchObservedRunningTime="2026-02-17 16:18:53.913154264 +0000 UTC m=+1446.330172242" Feb 17 16:18:53 crc kubenswrapper[4829]: I0217 16:18:53.919991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:53 crc 
kubenswrapper[4829]: I0217 16:18:53.930202 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59566c7c9b-gpfcg"] Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.293760 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" path="/var/lib/kubelet/pods/d027908d-4d46-40f2-a1d9-a6353e1d17be/volumes" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.352808 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:54 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:54 crc kubenswrapper[4829]: > Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568246 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:54 crc kubenswrapper[4829]: E0217 16:18:54.568864 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568886 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: E0217 16:18:54.568938 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.568946 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.569258 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-api" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 
16:18:54.569284 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d027908d-4d46-40f2-a1d9-a6353e1d17be" containerName="neutron-httpd" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.570525 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.572608 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.573223 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lrgxv" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.574344 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.588743 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.639867 4829 generic.go:334] "Generic (PLEG): container finished" podID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerID="03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" exitCode=0 Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.640735 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c"} Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674501 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 
16:18:54.674558 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674725 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.674754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778187 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.778218 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.780177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.784503 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.794011 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4561ce68-ba71-42ad-95ec-de8b705a06ef-openstack-config-secret\") pod \"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.805200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9l6r\" (UniqueName: \"kubernetes.io/projected/4561ce68-ba71-42ad-95ec-de8b705a06ef-kube-api-access-w9l6r\") pod 
\"openstackclient\" (UID: \"4561ce68-ba71-42ad-95ec-de8b705a06ef\") " pod="openstack/openstackclient" Feb 17 16:18:54 crc kubenswrapper[4829]: I0217 16:18:54.890438 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.094150 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.185761 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.185874 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186244 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186308 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs" (OuterVolumeSpecName: "logs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186477 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.186546 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") pod \"e42d92c8-c673-4220-bee5-af7b9151fe77\" (UID: \"e42d92c8-c673-4220-bee5-af7b9151fe77\") " Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.187508 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42d92c8-c673-4220-bee5-af7b9151fe77-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.199966 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts" (OuterVolumeSpecName: "scripts") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.201949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6" (OuterVolumeSpecName: "kube-api-access-v8mk6") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "kube-api-access-v8mk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.264461 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data" (OuterVolumeSpecName: "config-data") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291827 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291859 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.291869 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mk6\" (UniqueName: \"kubernetes.io/projected/e42d92c8-c673-4220-bee5-af7b9151fe77-kube-api-access-v8mk6\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.297393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.319787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.334484 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e42d92c8-c673-4220-bee5-af7b9151fe77" (UID: "e42d92c8-c673-4220-bee5-af7b9151fe77"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393446 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393478 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.393488 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42d92c8-c673-4220-bee5-af7b9151fe77-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.465078 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.650936 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c89899bcb-82htl" event={"ID":"e42d92c8-c673-4220-bee5-af7b9151fe77","Type":"ContainerDied","Data":"5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33"} Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.651202 4829 scope.go:117] "RemoveContainer" containerID="03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.650976 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c89899bcb-82htl" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.652423 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4561ce68-ba71-42ad-95ec-de8b705a06ef","Type":"ContainerStarted","Data":"28b2e37b83015dfe816dba6c3ec6a070fe3a9ee96638e3d82b93345cb40a44f0"} Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.678872 4829 scope.go:117] "RemoveContainer" containerID="0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca" Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.700370 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:55 crc kubenswrapper[4829]: I0217 16:18:55.717262 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c89899bcb-82htl"] Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.041386 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.302751 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" path="/var/lib/kubelet/pods/e42d92c8-c673-4220-bee5-af7b9151fe77/volumes" Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.671943 4829 generic.go:334] "Generic (PLEG): container finished" podID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerID="bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" exitCode=137 Feb 17 16:18:56 crc kubenswrapper[4829]: I0217 16:18:56.672039 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816"} Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.163152 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234522 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234749 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234777 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234866 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234895 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vthx\" (UniqueName: 
\"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.234966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") pod \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\" (UID: \"eebac8aa-36b1-4a0d-9490-c34c7d137be2\") " Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.235815 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.235995 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.243452 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts" (OuterVolumeSpecName: "scripts") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.247789 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx" (OuterVolumeSpecName: "kube-api-access-7vthx") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "kube-api-access-7vthx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.331775 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342006 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342036 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342045 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eebac8aa-36b1-4a0d-9490-c34c7d137be2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342053 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vthx\" (UniqueName: \"kubernetes.io/projected/eebac8aa-36b1-4a0d-9490-c34c7d137be2-kube-api-access-7vthx\") on 
node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.342062 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.397497 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.400396 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data" (OuterVolumeSpecName: "config-data") pod "eebac8aa-36b1-4a0d-9490-c34c7d137be2" (UID: "eebac8aa-36b1-4a0d-9490-c34c7d137be2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.443902 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.443929 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eebac8aa-36b1-4a0d-9490-c34c7d137be2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690208 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eebac8aa-36b1-4a0d-9490-c34c7d137be2","Type":"ContainerDied","Data":"9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412"} Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690258 4829 scope.go:117] "RemoveContainer" containerID="bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.690379 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.724680 4829 scope.go:117] "RemoveContainer" containerID="2f42fdb3e6b58123f6d05003037629f14a228399c44f6112a62baf583ce48ae0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.728384 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.738790 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748467 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748943 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748977 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.748992 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.748998 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749013 4829 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749019 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749036 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: E0217 16:18:57.749048 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749238 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-api" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749250 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-central-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749260 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="ceilometer-notification-agent" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749277 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42d92c8-c673-4220-bee5-af7b9151fe77" containerName="placement-log" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749286 4829 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="sg-core" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.749319 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" containerName="proxy-httpd" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.752177 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761507 4829 scope.go:117] "RemoveContainer" containerID="4a478894a78a66f181ae1506103e15663c6569c4e743796b3cc8c8784e953e13" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761691 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.761724 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.762838 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.806699 4829 scope.go:117] "RemoveContainer" containerID="9f77c7b5d43ea83dd93b3ec16678cced33123c4f38d6151cc624259450978d90" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851316 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851396 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: 
\"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851516 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851533 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.851907 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953067 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953146 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953191 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953219 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953239 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953301 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953567 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.953674 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.958617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.962531 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.963652 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.963822 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:57 crc kubenswrapper[4829]: I0217 16:18:57.984341 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"ceilometer-0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " pod="openstack/ceilometer-0" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.080619 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.104516 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.104618 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.162526 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" probeResult="failure" output=< Feb 17 16:18:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:18:58 crc kubenswrapper[4829]: > Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.186243 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.306052 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eebac8aa-36b1-4a0d-9490-c34c7d137be2" path="/var/lib/kubelet/pods/eebac8aa-36b1-4a0d-9490-c34c7d137be2/volumes" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.624259 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.718012 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"b15b8a2c2fe4022bce337bd6c570aad6d1fe85a99014bfa877c56e943e1fb42f"} Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.775306 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:18:58 crc kubenswrapper[4829]: I0217 16:18:58.828564 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:18:59 crc kubenswrapper[4829]: I0217 16:18:59.732898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4"} Feb 17 16:19:00 crc kubenswrapper[4829]: I0217 16:19:00.747387 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74rcl" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" containerID="cri-o://801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" gracePeriod=2 Feb 17 16:19:00 crc kubenswrapper[4829]: I0217 16:19:00.747951 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.297969 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.304207 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314090 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314168 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.314357 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nfxjw" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333275 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.333289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.351639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.388280 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.389767 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.399258 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.402684 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492230 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492297 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492320 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.492684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.506020 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " 
pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.509312 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.552989 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.562479 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"heat-engine-75c6bfd58d-6ndtv\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.580038 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595221 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595736 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-utilities" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595749 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-utilities" Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595774 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-content" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595781 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="extract-content" Feb 17 16:19:01 crc kubenswrapper[4829]: E0217 16:19:01.595788 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.595795 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.596016 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerName="registry-server" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.597157 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604109 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.601235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604938 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.604974 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") pod \"8fb22913-2026-46cd-b4b8-5ac091e23320\" (UID: \"8fb22913-2026-46cd-b4b8-5ac091e23320\") " Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605158 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605188 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605215 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605232 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605277 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605322 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605351 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605440 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605458 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605511 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.605978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities" (OuterVolumeSpecName: "utilities") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.607608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc" (OuterVolumeSpecName: "kube-api-access-xl6kc") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "kube-api-access-xl6kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.623677 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.640000 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.641347 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.645287 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.651948 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.697262 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708681 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708725 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708748 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708795 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " 
pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708842 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708889 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708917 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708952 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.708990 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " 
pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709036 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709076 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709114 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709183 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl6kc\" (UniqueName: \"kubernetes.io/projected/8fb22913-2026-46cd-b4b8-5ac091e23320-kube-api-access-xl6kc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709196 4829 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.709969 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.710237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.710415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.711099 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.711764 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " 
pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.728131 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.740160 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.740770 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"dnsmasq-dns-7d978555f9-lb9kf\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.741039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.741773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"heat-cfnapi-7b6b59579d-8dd2k\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:01 crc 
kubenswrapper[4829]: I0217 16:19:01.763142 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb22913-2026-46cd-b4b8-5ac091e23320" (UID: "8fb22913-2026-46cd-b4b8-5ac091e23320"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.765944 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770133 4829 generic.go:334] "Generic (PLEG): container finished" podID="8fb22913-2026-46cd-b4b8-5ac091e23320" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" exitCode=0 Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770196 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74rcl" event={"ID":"8fb22913-2026-46cd-b4b8-5ac091e23320","Type":"ContainerDied","Data":"ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6"} Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770268 4829 scope.go:117] "RemoveContainer" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.770593 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74rcl" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.825824 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826078 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826163 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.826297 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb22913-2026-46cd-b4b8-5ac091e23320-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.833377 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.833704 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.845617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.846249 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"heat-api-58844cd98c-2snd2\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.856850 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.885675 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74rcl"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.972681 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"] Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.974817 4829 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.976979 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.977190 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.978433 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 16:19:01 crc kubenswrapper[4829]: I0217 16:19:01.997506 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"] Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.030509 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032354 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032423 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032470 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032497 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032587 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032678 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.032774 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.035344 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.040423 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125029 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125257 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log" containerID="cri-o://9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b" gracePeriod=30 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.125718 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd" containerID="cri-o://40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" gracePeriod=30 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134431 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134463 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134497 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134520 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134663 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.134707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.135411 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-run-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.135491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-log-httpd\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.140557 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-public-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.140655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-combined-ca-bundle\") pod 
\"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.141773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-etc-swift\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.142002 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-config-data\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.145657 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-internal-tls-certs\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.158614 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8sj\" (UniqueName: \"kubernetes.io/projected/cd5d005a-eb7a-4cbc-932f-2640cb8068eb-kube-api-access-gx8sj\") pod \"swift-proxy-6d69d97dcf-pdd69\" (UID: \"cd5d005a-eb7a-4cbc-932f-2640cb8068eb\") " pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.298811 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb22913-2026-46cd-b4b8-5ac091e23320" path="/var/lib/kubelet/pods/8fb22913-2026-46cd-b4b8-5ac091e23320/volumes" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.301648 4829 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.781051 4829 generic.go:334] "Generic (PLEG): container finished" podID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerID="9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b" exitCode=143 Feb 17 16:19:02 crc kubenswrapper[4829]: I0217 16:19:02.781093 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b"} Feb 17 16:19:04 crc kubenswrapper[4829]: I0217 16:19:04.333519 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" probeResult="failure" output=< Feb 17 16:19:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:19:04 crc kubenswrapper[4829]: > Feb 17 16:19:05 crc kubenswrapper[4829]: E0217 16:19:05.132703 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-conmon-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-conmon-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-conmon-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:05 crc kubenswrapper[4829]: E0217 16:19:05.132735 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-conmon-03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-9d0b3b2a7a8417fa779edb964dd07c39faa76eca80a9015f85d3a3ffeec8b412\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-da53a4f46a183fda7d4a8a2fd2c1c549a80db6ecdf192e1a02f9c148212b3a14\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-conmon-0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-conmon-5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-ad5dc08aad2af8d474805b63e9bf5b65dcf4391a6c060911e623f397c8fd7cc6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-conmon-bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod631fedb6_df0e_40fa_a86c_40cc89db194f.slice/crio-conmon-98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-97d3cdf38fb75dcd44bef766fb5f6fb5d8809964ff8a389a8774115ffc31a668\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-5bbc8c82adf592838a09e124a4c8d97a2da2e5a2b14d072f6806eddcddad4ef3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-conmon-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-0eb5d402c5a16ce7a5de77d37d7bd15a23975372b6f21a7471677a6b26509aca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-03454f8a5a4185fdcc30b9fefad525167278c79e2cd84999901b2ae4d365ef2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd027908d_4d46_40f2_a1d9_a6353e1d17be.slice/crio-conmon-894efb7f9e72fad4ef1d3b9ea398082a3a3191b21766bbf4bb6a33d025c335f6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeebac8aa_36b1_4a0d_9490_c34c7d137be2.slice/crio-bd188b22551f9d24576fea512ae9bbf4b1d37a79e576fa7ae1bb9b9b116ca816.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42d92c8_c673_4220_bee5_af7b9151fe77.slice/crio-5bb65468ff5468ee2dbc8d3d36f5bb84364892b4f15f7ba29491e72590af8f33\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb22913_2026_46cd_b4b8_5ac091e23320.slice/crio-801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f6d9a97_e9e4_4378_96b9_18fc0262bd9e.slice/crio-conmon-9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2407c845_36e5_40f1_ae75_2b6c5fc31624.slice/crio-conmon-d159cd6b8ffce4b12417670ba8a58dc4567cb0509bb3839445227bba9abf6045.scope\": RecentStats: unable to find data in memory cache]" Feb 17 
16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.295999 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.834204 4829 generic.go:334] "Generic (PLEG): container finished" podID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerID="40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" exitCode=0 Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.834275 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9"} Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.838027 4829 generic.go:334] "Generic (PLEG): container finished" podID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerID="98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" exitCode=137 Feb 17 16:19:05 crc kubenswrapper[4829]: I0217 16:19:05.838073 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1"} Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.020897 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": dial tcp 10.217.0.205:8776: connect: connection refused" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.223408 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.302181 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g92l5" 
Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.784528 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.786305 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.808624 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.818713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.818814 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.902603 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.904305 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920687 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920802 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920827 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.920883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.921998 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"nova-api-db-create-cglz5\" (UID: 
\"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.923672 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.953959 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"nova-api-db-create-cglz5\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.985024 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:19:07 crc kubenswrapper[4829]: I0217 16:19:07.997268 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.015369 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.022830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.022880 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 
16:19:08.022985 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.023315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.024670 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.058229 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.060208 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.067838 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.103329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"nova-cell0-db-create-cnfbw\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.103404 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125389 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125455 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: 
\"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.125521 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.126290 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.163445 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.178298 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"nova-cell1-db-create-rzxtw\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.231368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.231487 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.232739 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.238170 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.256291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"nova-api-6c18-account-create-update-wl9ps\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.271644 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:08 crc kubenswrapper[4829]: E0217 16:19:08.276165 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.325432 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.327336 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.333199 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.342086 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.346543 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.446325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.446375 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " 
pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.454416 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.456358 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.458105 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.461273 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.467313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548699 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548746 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.548805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.549779 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.568501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"nova-cell0-535d-account-create-update-fmkp6\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.650599 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 
17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.650988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.651325 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.654819 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.673811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"nova-cell1-3357-account-create-update-rg852\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.775689 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.912868 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g92l5" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server" containerID="cri-o://4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" gracePeriod=2 Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.940780 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.942374 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.951959 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.979739 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:08 crc kubenswrapper[4829]: I0217 16:19:08.985969 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.036478 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.067155 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.069035 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071340 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071476 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071612 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod 
\"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071713 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071935 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.071988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.150610 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182350 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 
16:19:09.182394 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182441 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182545 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182561 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182641 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182679 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182697 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182741 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.182766 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.197655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.198912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.199689 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.200332 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-config-data-custom\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.202636 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/59de3866-adfb-4a8d-87f2-b54af38332d0-combined-ca-bundle\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.210461 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.215180 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"heat-cfnapi-6d5f4d8b58-jzbm7\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.226623 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvw2b\" (UniqueName: \"kubernetes.io/projected/59de3866-adfb-4a8d-87f2-b54af38332d0-kube-api-access-vvw2b\") pod \"heat-engine-7db87d5bbf-dtdjh\" (UID: \"59de3866-adfb-4a8d-87f2-b54af38332d0\") " pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.287987 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288072 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288107 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.288168 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.297815 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.300102 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.302509 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod 
\"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.313619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"heat-api-647dbf4b4b-fgckf\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.373691 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.400210 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.418155 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.927656 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerID="4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" exitCode=0 Feb 17 16:19:09 crc kubenswrapper[4829]: I0217 16:19:09.927718 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71"} Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.273144 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.302584 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:10 crc 
kubenswrapper[4829]: I0217 16:19:10.326101 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.327734 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.330612 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.330726 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.384508 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.386145 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.388968 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.389818 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.393887 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.402856 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"] Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.414884 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod \"heat-api-7bf669c95c-g7msn\" (UID: 
\"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.414999 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415081 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415145 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.415214 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517595 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517899 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517956 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.517988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518030 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518056 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518085 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518127 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod 
\"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.518249 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.537322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.537416 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-config-data-custom\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-combined-ca-bundle\") pod \"heat-api-7bf669c95c-g7msn\" (UID: 
\"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-internal-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.539991 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-public-tls-certs\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.549506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj7h\" (UniqueName: \"kubernetes.io/projected/be43e34b-d8ec-44cd-bc26-e0ce3c9797a7-kube-api-access-ckj7h\") pod \"heat-api-7bf669c95c-g7msn\" (UID: \"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7\") " pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620255 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620314 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " 
pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620339 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620378 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620435 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.620495 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.626508 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data-custom\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " 
pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.627645 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-internal-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.628140 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-public-tls-certs\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.628688 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-combined-ca-bundle\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.636680 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-config-data\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.648016 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nndmj\" (UniqueName: \"kubernetes.io/projected/5dfe4b1a-5f10-47f3-ab81-0807c468fab0-kube-api-access-nndmj\") pod \"heat-cfnapi-66bc7b8984-mg8sc\" (UID: \"5dfe4b1a-5f10-47f3-ab81-0807c468fab0\") " pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:10 crc 
kubenswrapper[4829]: I0217 16:19:10.705445 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:10 crc kubenswrapper[4829]: I0217 16:19:10.714687 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:11 crc kubenswrapper[4829]: I0217 16:19:11.662214 4829 scope.go:117] "RemoveContainer" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:11 crc kubenswrapper[4829]: I0217 16:19:11.949924 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.001755 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g92l5" event={"ID":"dcd8f257-bfbb-4393-b0b3-f1c955a73e05","Type":"ContainerDied","Data":"8564b30eb4354b49f93900e21450eee5beaaa5dd88d197e38f1082d1800edd61"} Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.001794 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g92l5" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091364 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091736 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.091804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") pod \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\" (UID: \"dcd8f257-bfbb-4393-b0b3-f1c955a73e05\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.093249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities" (OuterVolumeSpecName: "utilities") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.098663 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7" (OuterVolumeSpecName: "kube-api-access-4f2c7") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "kube-api-access-4f2c7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.125371 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.161492 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcd8f257-bfbb-4393-b0b3-f1c955a73e05" (UID: "dcd8f257-bfbb-4393-b0b3-f1c955a73e05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.166972 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195135 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195368 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.195438 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f2c7\" (UniqueName: \"kubernetes.io/projected/dcd8f257-bfbb-4393-b0b3-f1c955a73e05-kube-api-access-4f2c7\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.295901 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: 
\"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296065 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296262 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296394 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296486 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296650 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296777 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.296926 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") pod \"631fedb6-df0e-40fa-a86c-40cc89db194f\" (UID: \"631fedb6-df0e-40fa-a86c-40cc89db194f\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297207 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs" (OuterVolumeSpecName: "logs") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297563 4829 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631fedb6-df0e-40fa-a86c-40cc89db194f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.297674 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631fedb6-df0e-40fa-a86c-40cc89db194f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.312076 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.313561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts" (OuterVolumeSpecName: "scripts") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.318917 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg" (OuterVolumeSpecName: "kube-api-access-pc9xg") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "kube-api-access-pc9xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.325107 4829 scope.go:117] "RemoveContainer" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.333799 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.393889 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408329 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408388 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408688 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408794 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.408873 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") pod \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\" (UID: \"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e\") " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409399 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409416 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409427 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc9xg\" (UniqueName: \"kubernetes.io/projected/631fedb6-df0e-40fa-a86c-40cc89db194f-kube-api-access-pc9xg\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409438 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 
16:19:12.409645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.409979 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs" (OuterVolumeSpecName: "logs") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.425975 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts" (OuterVolumeSpecName: "scripts") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.430987 4829 scope.go:117] "RemoveContainer" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.432920 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data" (OuterVolumeSpecName: "config-data") pod "631fedb6-df0e-40fa-a86c-40cc89db194f" (UID: "631fedb6-df0e-40fa-a86c-40cc89db194f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.435282 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.444215 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": container with ID starting with 801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86 not found: ID does not exist" containerID="801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.444255 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86"} err="failed to get container status \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": rpc error: code = NotFound desc = could not find container \"801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86\": container with ID starting with 801e59ff8ee7671a8b9045948b9c1b03b0facef7f0da561ae9e30a5d01277e86 not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.444285 4829 scope.go:117] "RemoveContainer" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.450724 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": container with ID starting with 823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f not found: ID does not exist" containerID="823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.450759 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f"} err="failed to get container status \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": rpc error: code = NotFound desc = could not find container \"823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f\": container with ID starting with 823e30a0d5b3ab24135abb341dfe9e97a654c94bb930a9828deafc85fca5e02f not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.450784 4829 scope.go:117] "RemoveContainer" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.466645 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b" (OuterVolumeSpecName: "kube-api-access-88n9b") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "kube-api-access-88n9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: E0217 16:19:12.473720 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": container with ID starting with 3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635 not found: ID does not exist" containerID="3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.473765 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635"} err="failed to get container status \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": rpc error: code = NotFound desc = could not find container \"3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635\": container with ID starting with 3c95473e8c2a4663dc81b35d0708128a648226bd9f7695ead7faa875d3435635 not found: ID does not exist" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.473793 4829 scope.go:117] "RemoveContainer" containerID="4b83487854f03f5ff0ccc58af395439bf9661f4e5d484e018700308b43b7ec71" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.487745 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g92l5"] Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519207 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88n9b\" (UniqueName: \"kubernetes.io/projected/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-kube-api-access-88n9b\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519232 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-logs\") on node \"crc\" DevicePath \"\"" Feb 
17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519243 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519252 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631fedb6-df0e-40fa-a86c-40cc89db194f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.519263 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.540310 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (OuterVolumeSpecName: "glance") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.621049 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" " Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.638898 4829 scope.go:117] "RemoveContainer" containerID="002d286a9b9ffe9f086e7d8cf702319d5e23c19133157216074aeeba1f77068c" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.846163 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.846565 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537") on node "crc" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.853546 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.877912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.917656 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data" (OuterVolumeSpecName: "config-data") pod "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" (UID: "5f6d9a97-e9e4-4378-96b9-18fc0262bd9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928160 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928189 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928200 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:12 crc kubenswrapper[4829]: I0217 16:19:12.928212 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.012754 4829 scope.go:117] "RemoveContainer" containerID="c9dfdf23e042e518eb14bd2a583f5e689005df52681d28564d32884d32bcf23e" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.089409 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerStarted","Data":"8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.089864 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" containerID="cri-o://1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" 
gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090138 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090430 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" containerID="cri-o://8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090483 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" containerID="cri-o://8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.090516 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" containerID="cri-o://8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" gracePeriod=30 Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.114723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerStarted","Data":"32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.123642 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.759696388 podStartE2EDuration="16.123622872s" podCreationTimestamp="2026-02-17 16:18:57 +0000 UTC" firstStartedPulling="2026-02-17 16:18:58.6127864 +0000 UTC m=+1451.029804378" lastFinishedPulling="2026-02-17 16:19:11.976712884 +0000 UTC m=+1464.393730862" 
observedRunningTime="2026-02-17 16:19:13.110467327 +0000 UTC m=+1465.527485315" watchObservedRunningTime="2026-02-17 16:19:13.123622872 +0000 UTC m=+1465.540640850" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"631fedb6-df0e-40fa-a86c-40cc89db194f","Type":"ContainerDied","Data":"af2b5045e812af170b758635252bbd670b210016e6af4379123eb4ce501709f0"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127670 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.127713 4829 scope.go:117] "RemoveContainer" containerID="98e744bcdd9be5961b51e77b35cc90441be77d71cce1b8bef4fe8bc337c90bd1" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.131098 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4561ce68-ba71-42ad-95ec-de8b705a06ef","Type":"ContainerStarted","Data":"32fa907e41420333e66cf2b4635d5ee91a924e5de9bf58928768552d6a7363bc"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.152248 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.129957342 podStartE2EDuration="19.152234555s" podCreationTimestamp="2026-02-17 16:18:54 +0000 UTC" firstStartedPulling="2026-02-17 16:18:55.469941779 +0000 UTC m=+1447.886959757" lastFinishedPulling="2026-02-17 16:19:11.492218982 +0000 UTC m=+1463.909236970" observedRunningTime="2026-02-17 16:19:13.149956243 +0000 UTC m=+1465.566974221" watchObservedRunningTime="2026-02-17 16:19:13.152234555 +0000 UTC m=+1465.569252533" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.153612 4829 scope.go:117] "RemoveContainer" containerID="7222d84f804eb7f9120513124beef6529982f4f615916fca1210f03ec5f17499" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.156248 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5f6d9a97-e9e4-4378-96b9-18fc0262bd9e","Type":"ContainerDied","Data":"26df09ac78a076eb0f2fab2e97427288c9dbe4295d421971b90f039ccad0b50a"} Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.156430 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.180223 4829 scope.go:117] "RemoveContainer" containerID="40310d84f543af3c2d3e3aa547d42eb47ba2d1415fd23ff16b43314d27c1f9b9" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.197107 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.227879 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.261031 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262277 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262384 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262446 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-content" Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262504 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-content" Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262568 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262720 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api"
Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262790 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262848 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd"
Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.262909 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-utilities"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.262960 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="extract-utilities"
Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.263030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263083 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log"
Feb 17 16:19:13 crc kubenswrapper[4829]: E0217 16:19:13.263136 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263186 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263535 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-log"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263620 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" containerName="registry-server"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263689 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api-log"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263917 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" containerName="glance-httpd"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.263978 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265131 4829 scope.go:117] "RemoveContainer" containerID="9eee4833da9448f3fa257132de5b20630527c49225df6892119d6da497d58c5b"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265647 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.265826 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.268232 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.270667 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.270982 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.289181 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.312692 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.332339 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.348334 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.348720 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.351238 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.351552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.353528 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.358680 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.389707 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66bc7b8984-mg8sc"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.433487 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jpmqj"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550971 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.550994 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551013 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551105 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551155 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551176 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551198 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551219 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551357 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.551434 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656093 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656117 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656139 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656283 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656303 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656334 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656386 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656402 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656418 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656447 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.656493 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.671175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.673162 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/816bca39-deec-496c-bb97-40d4ad4ca878-logs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.674349 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.676187 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/816bca39-deec-496c-bb97-40d4ad4ca878-etc-machine-id\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.676522 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4708c572-1818-4307-8667-0e2cb60f5635-logs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.702378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-public-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.732811 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d69d97dcf-pdd69"]
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.738234 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.742998 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-scripts\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.743902 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.747624 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwx7\" (UniqueName: \"kubernetes.io/projected/816bca39-deec-496c-bb97-40d4ad4ca878-kube-api-access-fdwx7\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.747771 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748130 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708c572-1818-4307-8667-0e2cb60f5635-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748428 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-config-data-custom\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.748709 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.755221 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816bca39-deec-496c-bb97-40d4ad4ca878-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"816bca39-deec-496c-bb97-40d4ad4ca878\") " pod="openstack/cinder-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.774955 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5f6\" (UniqueName: \"kubernetes.io/projected/4708c572-1818-4307-8667-0e2cb60f5635-kube-api-access-fz5f6\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.841420 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.841462 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64c8e47add696cdcc960205f22041f4e7cd73f409784d529f450330c5e4d9560/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.957556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9ee397-19ef-4ddb-a1d0-ee1e4c3fa537\") pod \"glance-default-internal-api-0\" (UID: \"4708c572-1818-4307-8667-0e2cb60f5635\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:13 crc kubenswrapper[4829]: I0217 16:19:13.990386 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.037553 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227334 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" exitCode=2
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227657 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" exitCode=0
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f"}
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.227726 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4"}
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.245025 4829 generic.go:334] "Generic (PLEG): container finished" podID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerID="a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df" exitCode=0
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.245083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerDied","Data":"a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df"}
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.255649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"9b7829ddff737dae110188099ffcfcca290e157b306ee21c83290ddc54364056"}
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.260731 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" event={"ID":"5dfe4b1a-5f10-47f3-ab81-0807c468fab0","Type":"ContainerStarted","Data":"77da194c262ed24f7e5808a948240e19e60fa35611f92398267d500dd975f8ec"}
Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.306691 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice/crio-ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb WatchSource:0}: Error finding container ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb: Status 404 returned error can't find the container with id ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.324224 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6d9a97-e9e4-4378-96b9-18fc0262bd9e" path="/var/lib/kubelet/pods/5f6d9a97-e9e4-4378-96b9-18fc0262bd9e/volumes"
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.325780 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" path="/var/lib/kubelet/pods/631fedb6-df0e-40fa-a86c-40cc89db194f/volumes"
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.330963 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd8f257-bfbb-4393-b0b3-f1c955a73e05" path="/var/lib/kubelet/pods/dcd8f257-bfbb-4393-b0b3-f1c955a73e05/volumes"
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.331966 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.332010 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7db87d5bbf-dtdjh"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.332025 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"3cfd5b4a2eec48fa3b356560508d7b1e10c91f89ca2f91c17d90090a20ce014f"}
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.337377 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.349656 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.360214 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.441049 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.479814 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"]
Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.499567 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca WatchSource:0}: Error finding container 0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca: Status 404 returned error can't find the container with id 0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.502246 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"]
Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.530911 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice/crio-382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101 WatchSource:0}: Error finding container 382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101: Status 404 returned error can't find the container with id 382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.535827 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cglz5"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.556086 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.570485 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.583332 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7bf669c95c-g7msn"]
Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.590772 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531a6d2a_8cc6_4d30_a906_826fba92e926.slice/crio-4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d WatchSource:0}: Error finding container 4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d: Status 404 returned error can't find the container with id 4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d
Feb 17 16:19:14 crc kubenswrapper[4829]: W0217 16:19:14.594221 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe43e34b_d8ec_44cd_bc26_e0ce3c9797a7.slice/crio-e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0 WatchSource:0}: Error finding container e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0: Status 404 returned error can't find the container with id e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.599493 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.776328 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:19:14 crc kubenswrapper[4829]: I0217 16:19:14.995313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:19:15 crc kubenswrapper[4829]: W0217 16:19:15.088263 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4708c572_1818_4307_8667_0e2cb60f5635.slice/crio-6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a WatchSource:0}: Error finding container 6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a: Status 404 returned error can't find the container with id 6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.300248 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"effd450865bb97a34c3515f6ac7f39ede1e9688582703d4a3c8820cf02cb2a03"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.313671 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"4ef0f0fdd58c449b7bd153a2e6b41e72b42f83d436a32880335f79f65dd269bd"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.313717 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d69d97dcf-pdd69" event={"ID":"cd5d005a-eb7a-4cbc-932f-2640cb8068eb","Type":"ContainerStarted","Data":"b3a69a41237582e8aca84cc6f5a06a0f5de9dc81fff09c20093ef9e26ef4033b"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.314597 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d69d97dcf-pdd69"
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.314651 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d69d97dcf-pdd69"
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.321739 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"95375bc6f346a6fe6af46463b8db7c53fa38cd84c3783df66e0720a068bc27d4"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.323267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bf669c95c-g7msn" event={"ID":"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7","Type":"ContainerStarted","Data":"e2d3cb40e0f7c737e7d08326339636d7f80d907c28e1cc6959a0389fccd4e8d0"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.328753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerStarted","Data":"8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.334605 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7db87d5bbf-dtdjh" event={"ID":"59de3866-adfb-4a8d-87f2-b54af38332d0","Type":"ContainerStarted","Data":"b253dfec5873832620fdac0a570303465bbc77ba3023c843e4bde8980efbe498"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.334821 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d69d97dcf-pdd69" podStartSLOduration=14.334807497 podStartE2EDuration="14.334807497s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.332303609 +0000 UTC m=+1467.749321587" watchObservedRunningTime="2026-02-17 16:19:15.334807497 +0000 UTC m=+1467.751825465"
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.344066 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerStarted","Data":"0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.348793 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerStarted","Data":"92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.358441 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerStarted","Data":"ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb"}
Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.363950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf"
event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerStarted","Data":"d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.378162 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" exitCode=0 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.378280 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.382684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerStarted","Data":"382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.385722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerStarted","Data":"4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.388457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"6bf117ae2a7c8f70b821d470abbc0ca7f07ea10c493ca49a54093d81d17eb67a"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.393406 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerStarted","Data":"ad768e518034fae299e9c917a36a527e20f09615bf89f800e1faf24578b3afd0"} Feb 17 16:19:15 crc 
kubenswrapper[4829]: I0217 16:19:15.406295 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3357-account-create-update-rg852" podStartSLOduration=7.406274627 podStartE2EDuration="7.406274627s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.375777143 +0000 UTC m=+1467.792795111" watchObservedRunningTime="2026-02-17 16:19:15.406274627 +0000 UTC m=+1467.823292605" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.411269 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jpmqj" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" containerID="cri-o://c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6" gracePeriod=2 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.412244 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerStarted","Data":"18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f"} Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.412267 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerStarted","Data":"456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123"} Feb 17 16:19:15 crc kubenswrapper[4829]: E0217 16:19:15.574760 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.950675 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-cnfbw" podStartSLOduration=8.950656365 podStartE2EDuration="8.950656365s" podCreationTimestamp="2026-02-17 16:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:15.438516697 +0000 UTC m=+1467.855534675" watchObservedRunningTime="2026-02-17 16:19:15.950656365 +0000 UTC m=+1468.367674333" Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.960773 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.960988 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" containerID="cri-o://c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501" gracePeriod=30 Feb 17 16:19:15 crc kubenswrapper[4829]: I0217 16:19:15.961118 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" containerID="cri-o://53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5" gracePeriod=30 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.432621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7db87d5bbf-dtdjh" 
event={"ID":"59de3866-adfb-4a8d-87f2-b54af38332d0","Type":"ContainerStarted","Data":"93cdf8724baf647e738ca65ba597eb6d07b02bcc0c0078364e778089de2c195d"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.434221 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.438338 4829 generic.go:334] "Generic (PLEG): container finished" podID="dcdf2448-5ccb-4351-b022-de49263fd521" containerID="a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.438386 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerDied","Data":"a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.441617 4829 generic.go:334] "Generic (PLEG): container finished" podID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerID="c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.441683 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.444097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"feecf691f350e4e4d2f1d885c2443527110811f43796b500d48e8dd87dbe621e"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.446680 4829 generic.go:334] "Generic (PLEG): container finished" podID="08208ef6-e99c-4f83-952c-5828df9b7af8" 
containerID="a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.446723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.453529 4829 generic.go:334] "Generic (PLEG): container finished" podID="544f59e2-daea-45db-99b4-d9714f620a74" containerID="18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.453607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerDied","Data":"18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.459669 4829 generic.go:334] "Generic (PLEG): container finished" podID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerID="c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501" exitCode=143 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.459710 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.461701 4829 generic.go:334] "Generic (PLEG): container finished" podID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerID="163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.461795 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" 
event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerDied","Data":"163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.499217 4829 generic.go:334] "Generic (PLEG): container finished" podID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerID="19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.499293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerDied","Data":"19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.508339 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"c2c3295b07155a30b197a649d80dcf344571036b28fe9a727c6720bb13714e10"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.514915 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7db87d5bbf-dtdjh" podStartSLOduration=8.51489805 podStartE2EDuration="8.51489805s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:16.484519951 +0000 UTC m=+1468.901537919" watchObservedRunningTime="2026-02-17 16:19:16.51489805 +0000 UTC m=+1468.931916028" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.529926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerStarted","Data":"3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.531512 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.567784 4829 generic.go:334] "Generic (PLEG): container finished" podID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerID="7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38" exitCode=0 Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.568815 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerDied","Data":"7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38"} Feb 17 16:19:16 crc kubenswrapper[4829]: I0217 16:19:16.631132 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podStartSLOduration=15.631113628 podStartE2EDuration="15.631113628s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:16.603512114 +0000 UTC m=+1469.020530102" watchObservedRunningTime="2026-02-17 16:19:16.631113628 +0000 UTC m=+1469.048131606" Feb 17 16:19:17 crc kubenswrapper[4829]: I0217 16:19:17.058393 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="631fedb6-df0e-40fa-a86c-40cc89db194f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.205:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.223324 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.340793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") pod \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.341157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") pod \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\" (UID: \"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd\") " Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.347960 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" (UID: "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.365950 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc" (OuterVolumeSpecName: "kube-api-access-k7jpc") pod "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" (UID: "250927ce-8b7a-4c30-a13d-fd1cd34ee7cd"). InnerVolumeSpecName "kube-api-access-k7jpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.443751 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.443794 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7jpc\" (UniqueName: \"kubernetes.io/projected/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd-kube-api-access-k7jpc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.649835 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.652917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-535d-account-create-update-fmkp6" event={"ID":"250927ce-8b7a-4c30-a13d-fd1cd34ee7cd","Type":"ContainerDied","Data":"32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5"} Feb 17 16:19:18 crc kubenswrapper[4829]: I0217 16:19:18.652960 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cf1a46304425e8170ada9d27d1fe3ea419372ef7d0d302663da20e208f75b5" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.419706 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.519369 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.563029 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.574889 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") pod \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.574960 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") pod \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\" (UID: \"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.576027 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" (UID: "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.576720 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.582153 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.592561 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7" (OuterVolumeSpecName: "kube-api-access-6pzt7") pod "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" (UID: "4ef7195e-f16e-4c5e-a84c-69c571ec7bb5"). InnerVolumeSpecName "kube-api-access-6pzt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.635825 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677128 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677203 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") pod \"544f59e2-daea-45db-99b4-d9714f620a74\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677318 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") pod \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677346 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfxfk\" 
(UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") pod \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\" (UID: \"c8a9c261-a9c4-49c8-bec3-891a68d897b6\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677427 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") pod \"544f59e2-daea-45db-99b4-d9714f620a74\" (UID: \"544f59e2-daea-45db-99b4-d9714f620a74\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677700 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") pod \"c909da16-2d5d-4706-adb8-f8402ed9f01e\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") pod \"c909da16-2d5d-4706-adb8-f8402ed9f01e\" (UID: \"c909da16-2d5d-4706-adb8-f8402ed9f01e\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.677856 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") pod \"cb993f64-fe54-4fed-9aca-68e11a71eee7\" (UID: \"cb993f64-fe54-4fed-9aca-68e11a71eee7\") " Feb 17 16:19:19 crc 
kubenswrapper[4829]: I0217 16:19:19.678321 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8a9c261-a9c4-49c8-bec3-891a68d897b6" (UID: "c8a9c261-a9c4-49c8-bec3-891a68d897b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678786 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8a9c261-a9c4-49c8-bec3-891a68d897b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678804 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pzt7\" (UniqueName: \"kubernetes.io/projected/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-kube-api-access-6pzt7\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.678817 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.680325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities" (OuterVolumeSpecName: "utilities") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.681137 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "544f59e2-daea-45db-99b4-d9714f620a74" (UID: "544f59e2-daea-45db-99b4-d9714f620a74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.683835 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c909da16-2d5d-4706-adb8-f8402ed9f01e" (UID: "c909da16-2d5d-4706-adb8-f8402ed9f01e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3357-account-create-update-rg852" event={"ID":"c909da16-2d5d-4706-adb8-f8402ed9f01e","Type":"ContainerDied","Data":"ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698195 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff042944e2b958ca0caece25fe9a765fb2bd1f5586972bd81bc89c0ac3f1c5cb" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.698276 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703506 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cnfbw" event={"ID":"544f59e2-daea-45db-99b4-d9714f620a74","Type":"ContainerDied","Data":"456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703551 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456c5c0448d8ec1faa971231e10438f1601302fca69c304a6e9c3050cf24e123" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.703630 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.720010 4829 generic.go:334] "Generic (PLEG): container finished" podID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerID="53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5" exitCode=0 Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.720723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rzxtw" event={"ID":"4ef7195e-f16e-4c5e-a84c-69c571ec7bb5","Type":"ContainerDied","Data":"8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740395 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8998dac78502100bdb3a85b31ad0119425fbccd39e048a65768629c37c7e203a" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.740449 4829 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rzxtw" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.750676 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cglz5" event={"ID":"dcdf2448-5ccb-4351-b022-de49263fd521","Type":"ContainerDied","Data":"382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756290 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="382b8f70b20c0dcd96f5db8f6b40aa320fbdf6d8b0e75123759c44346bd81101" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.756978 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.766331 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx" (OuterVolumeSpecName: "kube-api-access-zkjwx") pod "c909da16-2d5d-4706-adb8-f8402ed9f01e" (UID: "c909da16-2d5d-4706-adb8-f8402ed9f01e"). InnerVolumeSpecName "kube-api-access-zkjwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.767249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7" (OuterVolumeSpecName: "kube-api-access-cgff7") pod "544f59e2-daea-45db-99b4-d9714f620a74" (UID: "544f59e2-daea-45db-99b4-d9714f620a74"). InnerVolumeSpecName "kube-api-access-cgff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.767735 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr" (OuterVolumeSpecName: "kube-api-access-65prr") pod "cb993f64-fe54-4fed-9aca-68e11a71eee7" (UID: "cb993f64-fe54-4fed-9aca-68e11a71eee7"). InnerVolumeSpecName "kube-api-access-65prr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.768746 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk" (OuterVolumeSpecName: "kube-api-access-nfxfk") pod "c8a9c261-a9c4-49c8-bec3-891a68d897b6" (UID: "c8a9c261-a9c4-49c8-bec3-891a68d897b6"). InnerVolumeSpecName "kube-api-access-nfxfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785719 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6c18-account-create-update-wl9ps" event={"ID":"c8a9c261-a9c4-49c8-bec3-891a68d897b6","Type":"ContainerDied","Data":"92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785755 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d761f50191bc2917f54cdb298de6d2f4825b81d1a550f56ec4e8e5ad3c6209" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.785838 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6c18-account-create-update-wl9ps" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.788172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") pod \"dcdf2448-5ccb-4351-b022-de49263fd521\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.788345 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") pod \"dcdf2448-5ccb-4351-b022-de49263fd521\" (UID: \"dcdf2448-5ccb-4351-b022-de49263fd521\") " Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.799655 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dcdf2448-5ccb-4351-b022-de49263fd521" (UID: "dcdf2448-5ccb-4351-b022-de49263fd521"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800155 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800183 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544f59e2-daea-45db-99b4-d9714f620a74-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800210 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c909da16-2d5d-4706-adb8-f8402ed9f01e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800222 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkjwx\" (UniqueName: \"kubernetes.io/projected/c909da16-2d5d-4706-adb8-f8402ed9f01e-kube-api-access-zkjwx\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800236 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb993f64-fe54-4fed-9aca-68e11a71eee7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800246 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65prr\" (UniqueName: \"kubernetes.io/projected/cb993f64-fe54-4fed-9aca-68e11a71eee7-kube-api-access-65prr\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.800265 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgff7\" (UniqueName: \"kubernetes.io/projected/544f59e2-daea-45db-99b4-d9714f620a74-kube-api-access-cgff7\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc 
kubenswrapper[4829]: I0217 16:19:19.800278 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfxfk\" (UniqueName: \"kubernetes.io/projected/c8a9c261-a9c4-49c8-bec3-891a68d897b6-kube-api-access-nfxfk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.847852 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc" (OuterVolumeSpecName: "kube-api-access-j6wlc") pod "dcdf2448-5ccb-4351-b022-de49263fd521" (UID: "dcdf2448-5ccb-4351-b022-de49263fd521"). InnerVolumeSpecName "kube-api-access-j6wlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpmqj" event={"ID":"cb993f64-fe54-4fed-9aca-68e11a71eee7","Type":"ContainerDied","Data":"0fd5b95bfcdbd17444106a7582b0350a2e25cba6b6dd5d34c5e4561367384086"} Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851238 4829 scope.go:117] "RemoveContainer" containerID="c9ddeefd1963cd3f9a56a0ba38a667904fbf10048a6338192e1645e89abfd8b6" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.851432 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jpmqj" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.903043 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6wlc\" (UniqueName: \"kubernetes.io/projected/dcdf2448-5ccb-4351-b022-de49263fd521-kube-api-access-j6wlc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:19 crc kubenswrapper[4829]: I0217 16:19:19.903076 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdf2448-5ccb-4351-b022-de49263fd521-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.015533 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.057540 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jpmqj"] Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.220057 4829 scope.go:117] "RemoveContainer" containerID="bcac7d642dcdb322f81face8120317f047352869a42e4933796745c4aa43f357" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.296690 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" path="/var/lib/kubelet/pods/cb993f64-fe54-4fed-9aca-68e11a71eee7/volumes" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.497977 4829 scope.go:117] "RemoveContainer" containerID="aed45633f60d99541ba038e78c0b2e0b374afd5ea7aac8938d63a404f1ffb1c7" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.573365 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621257 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621674 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.621780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622131 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.622301 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"c3f146bc-ed08-462a-9c4a-f5641b460469\" (UID: \"c3f146bc-ed08-462a-9c4a-f5641b460469\") " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.624170 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs" (OuterVolumeSpecName: "logs") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.627196 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.667776 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts" (OuterVolumeSpecName: "scripts") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.667858 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk" (OuterVolumeSpecName: "kube-api-access-rsjdk") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "kube-api-access-rsjdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.677846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725542 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725572 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsjdk\" (UniqueName: \"kubernetes.io/projected/c3f146bc-ed08-462a-9c4a-f5641b460469-kube-api-access-rsjdk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725593 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725601 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.725610 4829 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3f146bc-ed08-462a-9c4a-f5641b460469-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.773683 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (OuterVolumeSpecName: "glance") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "pvc-60154460-e4e5-447b-9d26-02e14a9d8490". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.798513 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data" (OuterVolumeSpecName: "config-data") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.827833 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.828406 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" " Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.870809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9"} Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.870871 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.877602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerStarted","Data":"28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2"} Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.877738 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.885855 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3f146bc-ed08-462a-9c4a-f5641b460469","Type":"ContainerDied","Data":"c8e81e7e1defbd153394d4646231aa0526f50eda26bb5fe7533fac1512aa59a1"} Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.885923 4829 scope.go:117] "RemoveContainer" containerID="53f1e0f969060d3a33c6a5962edc0a76f2003ac98cc82582a735a27ab0ead2d5" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.886067 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.888417 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podStartSLOduration=8.036838752 podStartE2EDuration="12.888399621s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.34904626 +0000 UTC m=+1466.766064238" lastFinishedPulling="2026-02-17 16:19:19.200607129 +0000 UTC m=+1471.617625107" observedRunningTime="2026-02-17 16:19:20.888254027 +0000 UTC m=+1473.305272005" watchObservedRunningTime="2026-02-17 16:19:20.888399621 +0000 UTC m=+1473.305417599" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.890842 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerStarted","Data":"7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb"} Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.891094 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-58844cd98c-2snd2" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" 
containerID="cri-o://7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" gracePeriod=60 Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.891476 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.899977 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7"} Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.900254 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.912747 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" podStartSLOduration=19.912730718 podStartE2EDuration="19.912730718s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.908833493 +0000 UTC m=+1473.325851471" watchObservedRunningTime="2026-02-17 16:19:20.912730718 +0000 UTC m=+1473.329748696" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.929645 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.929793 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-60154460-e4e5-447b-9d26-02e14a9d8490" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490") on node "crc" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.930493 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.946785 4829 scope.go:117] "RemoveContainer" containerID="c397bd2749a8ef209d6ee69f8792dcf0366d749e2a56b0ef8cdf66f338149501" Feb 17 16:19:20 crc kubenswrapper[4829]: I0217 16:19:20.990144 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-58844cd98c-2snd2" podStartSLOduration=15.306071983 podStartE2EDuration="19.990122978s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.511804065 +0000 UTC m=+1466.928822043" lastFinishedPulling="2026-02-17 16:19:19.19585506 +0000 UTC m=+1471.612873038" observedRunningTime="2026-02-17 16:19:20.947297781 +0000 UTC m=+1473.364315759" watchObservedRunningTime="2026-02-17 16:19:20.990122978 +0000 UTC m=+1473.407140966" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.001772 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-647dbf4b4b-fgckf" podStartSLOduration=7.165747022 podStartE2EDuration="13.001748582s" podCreationTimestamp="2026-02-17 16:19:08 +0000 UTC" firstStartedPulling="2026-02-17 16:19:13.323716064 +0000 UTC m=+1465.740734042" lastFinishedPulling="2026-02-17 16:19:19.159717624 +0000 UTC m=+1471.576735602" observedRunningTime="2026-02-17 16:19:20.963698134 +0000 UTC m=+1473.380716112" watchObservedRunningTime="2026-02-17 
16:19:21.001748582 +0000 UTC m=+1473.418766560" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.089517 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3f146bc-ed08-462a-9c4a-f5641b460469" (UID: "c3f146bc-ed08-462a-9c4a-f5641b460469"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.135219 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3f146bc-ed08-462a-9c4a-f5641b460469-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.306143 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.316941 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332091 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332589 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332606 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332617 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332633 4829 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332667 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-content" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332674 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-content" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332685 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332691 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332701 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-utilities" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332707 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="extract-utilities" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332714 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332721 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332735 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332741 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332758 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332763 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332772 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332778 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332788 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332794 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: E0217 16:19:21.332802 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.332808 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333005 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb993f64-fe54-4fed-9aca-68e11a71eee7" containerName="registry-server" Feb 17 16:19:21 crc kubenswrapper[4829]: 
I0217 16:19:21.333020 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333033 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="544f59e2-daea-45db-99b4-d9714f620a74" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333044 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-httpd" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333052 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333059 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" containerName="glance-log" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333068 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333078 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" containerName="mariadb-database-create" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.333090 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" containerName="mariadb-account-create-update" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.334358 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.337809 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.364269 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.366845 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.440817 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441306 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441451 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441766 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441793 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441867 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.441905 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544486 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544564 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544695 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544733 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544810 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544825 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.544851 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.545274 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-logs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.545482 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/417e614d-4be6-439c-9fbc-65e970d1614f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.549774 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-config-data\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.550142 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.550171 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8f70a9e1e50c522452a5e14066ef931b1a337b1d311426f427b4354159fee773/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.562839 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.563294 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-scripts\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.568220 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/417e614d-4be6-439c-9fbc-65e970d1614f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.582028 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r865r\" (UniqueName: \"kubernetes.io/projected/417e614d-4be6-439c-9fbc-65e970d1614f-kube-api-access-r865r\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.684624 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-60154460-e4e5-447b-9d26-02e14a9d8490\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-60154460-e4e5-447b-9d26-02e14a9d8490\") pod \"glance-default-external-api-0\" (UID: \"417e614d-4be6-439c-9fbc-65e970d1614f\") " pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.913827 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"816bca39-deec-496c-bb97-40d4ad4ca878","Type":"ContainerStarted","Data":"924d00ed836b571c32d69ecb057ea48470718059438d6e5408ef3d836d3a7a0e"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.914314 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.915414 4829 generic.go:334] "Generic (PLEG): container finished" podID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9" exitCode=1 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.915475 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" 
event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.916116 4829 scope.go:117] "RemoveContainer" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.916845 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7bf669c95c-g7msn" event={"ID":"be43e34b-d8ec-44cd-bc26-e0ce3c9797a7","Type":"ContainerStarted","Data":"afa64044d9cc839b7e18d702eea2f9ae926189a112c5e5299c5ac2d9b45e2db9"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.917054 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924150 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerStarted","Data":"04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924283 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.924291 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" containerID="cri-o://04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1" gracePeriod=60 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927122 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7" exitCode=1 Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927178 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.927891 4829 scope.go:117] "RemoveContainer" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.930888 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4708c572-1818-4307-8667-0e2cb60f5635","Type":"ContainerStarted","Data":"fce6ee49837f18aeb4ef673987697711ac43588da91bd68ab4cd453076fb5ec7"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.935972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" event={"ID":"5dfe4b1a-5f10-47f3-ab81-0807c468fab0","Type":"ContainerStarted","Data":"8c1c1354bf0b94e8c9c24f6c40dda3774dc832ece8aab327d939ab39a2f29b5e"} Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.936647 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.943270 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.943253283 podStartE2EDuration="8.943253283s" podCreationTimestamp="2026-02-17 16:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:21.93017124 +0000 UTC m=+1474.347189218" watchObservedRunningTime="2026-02-17 16:19:21.943253283 +0000 UTC m=+1474.360271261" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.964767 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:19:21 crc kubenswrapper[4829]: I0217 16:19:21.967811 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" podStartSLOduration=16.332165718 podStartE2EDuration="20.967794386s" podCreationTimestamp="2026-02-17 16:19:01 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.594709373 +0000 UTC m=+1467.011727351" lastFinishedPulling="2026-02-17 16:19:19.230338041 +0000 UTC m=+1471.647356019" observedRunningTime="2026-02-17 16:19:21.956021089 +0000 UTC m=+1474.373039067" watchObservedRunningTime="2026-02-17 16:19:21.967794386 +0000 UTC m=+1474.384812364" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.029013 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7bf669c95c-g7msn" podStartSLOduration=7.436842465 podStartE2EDuration="12.028996929s" podCreationTimestamp="2026-02-17 16:19:10 +0000 UTC" firstStartedPulling="2026-02-17 16:19:14.603716576 +0000 UTC m=+1467.020734554" lastFinishedPulling="2026-02-17 16:19:19.19587104 +0000 UTC m=+1471.612889018" observedRunningTime="2026-02-17 16:19:22.025366621 +0000 UTC m=+1474.442384619" watchObservedRunningTime="2026-02-17 16:19:22.028996929 +0000 UTC m=+1474.446014907" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.061486 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" podStartSLOduration=6.187573284 podStartE2EDuration="12.061468516s" podCreationTimestamp="2026-02-17 16:19:10 +0000 UTC" firstStartedPulling="2026-02-17 16:19:13.324377373 +0000 UTC m=+1465.741395351" lastFinishedPulling="2026-02-17 16:19:19.198272605 +0000 UTC m=+1471.615290583" observedRunningTime="2026-02-17 16:19:22.055880995 +0000 UTC m=+1474.472898983" watchObservedRunningTime="2026-02-17 16:19:22.061468516 +0000 UTC m=+1474.478486494" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 
16:19:22.108799 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.108780673 podStartE2EDuration="9.108780673s" podCreationTimestamp="2026-02-17 16:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:22.095168485 +0000 UTC m=+1474.512186463" watchObservedRunningTime="2026-02-17 16:19:22.108780673 +0000 UTC m=+1474.525798651" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.301207 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f146bc-ed08-462a-9c4a-f5641b460469" path="/var/lib/kubelet/pods/c3f146bc-ed08-462a-9c4a-f5641b460469/volumes" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.309346 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.309399 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d69d97dcf-pdd69" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.430534 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.430916 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.703539 4829 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.949510 4829 generic.go:334] "Generic (PLEG): container finished" podID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerID="04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1" exitCode=0 Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.949588 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerDied","Data":"04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1"} Feb 17 16:19:22 crc kubenswrapper[4829]: I0217 16:19:22.951285 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"ef332962cfbb0da0428cedc06ffb50074342b92fa6e7baf8ac870434bd9e9166"} Feb 17 16:19:23 crc kubenswrapper[4829]: E0217 16:19:23.562499 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.968643 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.970301 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.986413 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.992386 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wx8s7" Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.994106 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:19:23 crc kubenswrapper[4829]: I0217 16:19:23.994240 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030175 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030258 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030315 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " 
pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.030384 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.037964 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.039335 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.078370 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.117117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132395 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132753 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " 
pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.132896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.133104 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.142811 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.146379 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.153402 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " 
pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.155242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"nova-cell0-conductor-db-sync-f9vr7\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.291367 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.400876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.425326 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.965026 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.975709 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.975999 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.976122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.976183 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") pod \"531a6d2a-8cc6-4d30-a906-826fba92e926\" (UID: \"531a6d2a-8cc6-4d30-a906-826fba92e926\") " Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.982742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:24 crc kubenswrapper[4829]: I0217 16:19:24.989709 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk" (OuterVolumeSpecName: "kube-api-access-pqzqk") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "kube-api-access-pqzqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.030655 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052157 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" event={"ID":"531a6d2a-8cc6-4d30-a906-826fba92e926","Type":"ContainerDied","Data":"4c902dfc5a7a0797ee28e5b2f0e7c7e7ec51425e7920c7c93ab08f2fe74d875d"} Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052467 4829 scope.go:117] "RemoveContainer" containerID="04743b4594d4cb733a9f9aee2a9565e66b46b6e3e63b0429e85b69b48f18ecc1" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.052589 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b6b59579d-8dd2k" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.059885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerStarted","Data":"e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"} Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.061689 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078884 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078924 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqzqk\" (UniqueName: \"kubernetes.io/projected/531a6d2a-8cc6-4d30-a906-826fba92e926-kube-api-access-pqzqk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.078935 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.079763 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerStarted","Data":"9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"} Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080542 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.080878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data" (OuterVolumeSpecName: "config-data") pod "531a6d2a-8cc6-4d30-a906-826fba92e926" (UID: "531a6d2a-8cc6-4d30-a906-826fba92e926"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.181214 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/531a6d2a-8cc6-4d30-a906-826fba92e926-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.396038 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.407447 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b6b59579d-8dd2k"] Feb 17 16:19:25 crc kubenswrapper[4829]: I0217 16:19:25.449044 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:19:25 crc kubenswrapper[4829]: W0217 16:19:25.453808 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443 WatchSource:0}: Error finding container 3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443: Status 404 returned error can't find the container with id 3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443 Feb 17 16:19:25 crc kubenswrapper[4829]: E0217 16:19:25.677385 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.098209 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"5c68e78e9dafd8fee502c806ca62674bf75ddb93f865f78af0d551b191fab20f"} Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.099535 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"417e614d-4be6-439c-9fbc-65e970d1614f","Type":"ContainerStarted","Data":"22b2d64ca7d7156d906cb52a8ed5f292f4386365304da910768ec0db2d4c0335"} Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102588 4829 generic.go:334] "Generic (PLEG): container finished" podID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" exitCode=1 Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102686 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f"} Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.102721 4829 scope.go:117] "RemoveContainer" containerID="b32c3a4f873e18355f5599d04fa7c0984cf4ec0571e6b86e8b3a211ecc3876a9" Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.103657 4829 scope.go:117] "RemoveContainer" 
containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" Feb 17 16:19:26 crc kubenswrapper[4829]: E0217 16:19:26.104004 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.110788 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerStarted","Data":"3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443"} Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.125942 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" exitCode=1 Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.128018 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235"} Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.128074 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" Feb 17 16:19:26 crc kubenswrapper[4829]: E0217 16:19:26.129259 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647dbf4b4b-fgckf_openstack(cbedef6f-85e8-418a-b925-8d2a8e73bb5c)\"" pod="openstack/heat-api-647dbf4b4b-fgckf" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" Feb 17 16:19:26 
crc kubenswrapper[4829]: I0217 16:19:26.157670 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.157643927 podStartE2EDuration="5.157643927s" podCreationTimestamp="2026-02-17 16:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:26.128208033 +0000 UTC m=+1478.545226011" watchObservedRunningTime="2026-02-17 16:19:26.157643927 +0000 UTC m=+1478.574661925" Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.306994 4829 scope.go:117] "RemoveContainer" containerID="24a199a2ad6b19d28caaf2023a8fa281e1607631e7ef36f2236db9885f749db7" Feb 17 16:19:26 crc kubenswrapper[4829]: I0217 16:19:26.411306 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" path="/var/lib/kubelet/pods/531a6d2a-8cc6-4d30-a906-826fba92e926/volumes" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.031737 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.110424 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.110707 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" containerID="cri-o://0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" gracePeriod=10 Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.168827 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" Feb 17 16:19:27 crc kubenswrapper[4829]: E0217 16:19:27.169073 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.176833 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.176847 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:19:27 crc kubenswrapper[4829]: E0217 16:19:27.177030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-647dbf4b4b-fgckf_openstack(cbedef6f-85e8-418a-b925-8d2a8e73bb5c)\"" pod="openstack/heat-api-647dbf4b4b-fgckf" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.836453 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967103 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967192 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967252 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967358 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:27 crc kubenswrapper[4829]: I0217 16:19:27.967550 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") pod \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\" (UID: \"24a26c9f-0ba5-4714-9b6e-5319f3ed903a\") " Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:27.984275 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx" (OuterVolumeSpecName: "kube-api-access-9ffxx") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "kube-api-access-9ffxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.045386 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.063823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.067089 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.076087 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config" (OuterVolumeSpecName: "config") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.077266 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24a26c9f-0ba5-4714-9b6e-5319f3ed903a" (UID: "24a26c9f-0ba5-4714-9b6e-5319f3ed903a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.087494 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093293 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093336 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093350 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093362 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093373 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffxx\" (UniqueName: \"kubernetes.io/projected/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-kube-api-access-9ffxx\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.093384 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24a26c9f-0ba5-4714-9b6e-5319f3ed903a-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196863 4829 generic.go:334] "Generic (PLEG): container finished" podID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" exitCode=0 Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"} Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196959 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" event={"ID":"24a26c9f-0ba5-4714-9b6e-5319f3ed903a","Type":"ContainerDied","Data":"25c76158cbbd089e89beb231349a135df7ab735e2a004c66b802c8527397a342"} Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.196984 4829 scope.go:117] "RemoveContainer" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.197117 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-5skss" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.236155 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.249777 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-5skss"] Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.289929 4829 scope.go:117] "RemoveContainer" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.308257 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" path="/var/lib/kubelet/pods/24a26c9f-0ba5-4714-9b6e-5319f3ed903a/volumes" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.342249 4829 scope.go:117] "RemoveContainer" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" Feb 17 16:19:28 crc kubenswrapper[4829]: E0217 16:19:28.343037 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": container with ID starting with 0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12 not found: ID does not exist" containerID="0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343139 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12"} err="failed to get container status \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": rpc error: code = NotFound desc = could not find container \"0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12\": container with ID starting with 
0fd8417623befac245a1034c94f9ee7696378881ed129073eef28852f3960e12 not found: ID does not exist" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343222 4829 scope.go:117] "RemoveContainer" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022" Feb 17 16:19:28 crc kubenswrapper[4829]: E0217 16:19:28.343523 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": container with ID starting with 8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022 not found: ID does not exist" containerID="8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.343746 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022"} err="failed to get container status \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": rpc error: code = NotFound desc = could not find container \"8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022\": container with ID starting with 8af2319ddfcb7c165da732a9608bd02726610d39ce248de06d98b945884a8022 not found: ID does not exist" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.917517 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:19:28 crc kubenswrapper[4829]: I0217 16:19:28.927739 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7bf669c95c-g7msn" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.050954 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.400841 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.401107 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.401937 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.402217 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d5f4d8b58-jzbm7_openstack(54ae6e91-44b3-4b86-9d98-ff9d0b0624ca)\"" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.474373 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7db87d5bbf-dtdjh" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.556996 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.557196 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" containerID="cri-o://3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" gracePeriod=60 Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.568059 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.568146 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.575240 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.583812 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.593609 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.624729 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:29 crc kubenswrapper[4829]: E0217 16:19:29.624795 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.761361 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970045 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970122 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970201 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.970243 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") pod \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\" (UID: \"cbedef6f-85e8-418a-b925-8d2a8e73bb5c\") " Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.980856 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9" (OuterVolumeSpecName: "kube-api-access-cqhk9") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "kube-api-access-cqhk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:29 crc kubenswrapper[4829]: I0217 16:19:29.993772 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.025304 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073084 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhk9\" (UniqueName: \"kubernetes.io/projected/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-kube-api-access-cqhk9\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073115 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.073124 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.118410 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data" 
(OuterVolumeSpecName: "config-data") pod "cbedef6f-85e8-418a-b925-8d2a8e73bb5c" (UID: "cbedef6f-85e8-418a-b925-8d2a8e73bb5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.175558 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbedef6f-85e8-418a-b925-8d2a8e73bb5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.230851 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-647dbf4b4b-fgckf" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.236865 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-647dbf4b4b-fgckf" event={"ID":"cbedef6f-85e8-418a-b925-8d2a8e73bb5c","Type":"ContainerDied","Data":"9b7829ddff737dae110188099ffcfcca290e157b306ee21c83290ddc54364056"} Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.236966 4829 scope.go:117] "RemoveContainer" containerID="e1df845890dbbdb9d64aacd017b5cfa66689bda16fcadd3bbd30947d55fb5235" Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.273515 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:30 crc kubenswrapper[4829]: I0217 16:19:30.306244 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-647dbf4b4b-fgckf"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.706810 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.710954 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.715890 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 16:19:31 crc kubenswrapper[4829]: E0217 16:19:31.715927 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75c6bfd58d-6ndtv" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:31 crc kubenswrapper[4829]: I0217 16:19:31.966159 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:19:31 crc kubenswrapper[4829]: I0217 16:19:31.966227 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.027033 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.046197 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.259526 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.259561 
4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.303112 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" path="/var/lib/kubelet/pods/cbedef6f-85e8-418a-b925-8d2a8e73bb5c/volumes" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.891127 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-66bc7b8984-mg8sc" Feb 17 16:19:32 crc kubenswrapper[4829]: I0217 16:19:32.996147 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:33 crc kubenswrapper[4829]: I0217 16:19:33.920784 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:19:34 crc kubenswrapper[4829]: I0217 16:19:34.031822 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="816bca39-deec-496c-bb97-40d4ad4ca878" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.305901 4829 generic.go:334] "Generic (PLEG): container finished" podID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" exitCode=0 Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.305991 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerDied","Data":"3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa"} Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.851613 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.851693 4829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:19:35 crc kubenswrapper[4829]: I0217 16:19:35.855164 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:19:36 crc kubenswrapper[4829]: E0217 16:19:36.182985 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:38 crc kubenswrapper[4829]: E0217 16:19:38.263382 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.821510 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.936645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.936744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.937033 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.937064 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") pod \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\" (UID: \"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca\") " Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.944806 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.969886 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4" (OuterVolumeSpecName: "kube-api-access-mc9n4") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "kube-api-access-mc9n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:39 crc kubenswrapper[4829]: I0217 16:19:39.988823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.029917 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data" (OuterVolumeSpecName: "config-data") pod "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" (UID: "54ae6e91-44b3-4b86-9d98-ff9d0b0624ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040361 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc9n4\" (UniqueName: \"kubernetes.io/projected/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-kube-api-access-mc9n4\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040390 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040945 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.040957 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382545 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" event={"ID":"54ae6e91-44b3-4b86-9d98-ff9d0b0624ca","Type":"ContainerDied","Data":"95375bc6f346a6fe6af46463b8db7c53fa38cd84c3783df66e0720a068bc27d4"} Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382853 4829 scope.go:117] "RemoveContainer" containerID="9fc8e93d1ee838a2f8372529bb68c44066d6996244e0cfb5cebd41c0e3dbd78f" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.382651 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6d5f4d8b58-jzbm7" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.415367 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.420824 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.428061 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d5f4d8b58-jzbm7"] Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550513 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550607 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.550637 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") pod \"8f1cb833-fb61-463d-a2d4-c14d51370dc9\" (UID: 
\"8f1cb833-fb61-463d-a2d4-c14d51370dc9\") " Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.555810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg" (OuterVolumeSpecName: "kube-api-access-bckkg") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "kube-api-access-bckkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.555901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.580269 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.607317 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data" (OuterVolumeSpecName: "config-data") pod "8f1cb833-fb61-463d-a2d4-c14d51370dc9" (UID: "8f1cb833-fb61-463d-a2d4-c14d51370dc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.653966 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bckkg\" (UniqueName: \"kubernetes.io/projected/8f1cb833-fb61-463d-a2d4-c14d51370dc9-kube-api-access-bckkg\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654212 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654290 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:40 crc kubenswrapper[4829]: I0217 16:19:40.654354 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f1cb833-fb61-463d-a2d4-c14d51370dc9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.408442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerStarted","Data":"56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029"} Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.410131 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75c6bfd58d-6ndtv" event={"ID":"8f1cb833-fb61-463d-a2d4-c14d51370dc9","Type":"ContainerDied","Data":"ad768e518034fae299e9c917a36a527e20f09615bf89f800e1faf24578b3afd0"} Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.410171 4829 scope.go:117] "RemoveContainer" containerID="3938d3da9ed947bc75e2440aba26114a8b099d9177938c14f65bc57eae8dc0aa" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 
16:19:41.410273 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75c6bfd58d-6ndtv" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.434380 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" podStartSLOduration=3.361207183 podStartE2EDuration="18.434348814s" podCreationTimestamp="2026-02-17 16:19:23 +0000 UTC" firstStartedPulling="2026-02-17 16:19:25.456873856 +0000 UTC m=+1477.873891844" lastFinishedPulling="2026-02-17 16:19:40.530015497 +0000 UTC m=+1492.947033475" observedRunningTime="2026-02-17 16:19:41.433976715 +0000 UTC m=+1493.850994713" watchObservedRunningTime="2026-02-17 16:19:41.434348814 +0000 UTC m=+1493.851366792" Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.465862 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:41 crc kubenswrapper[4829]: I0217 16:19:41.479312 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-75c6bfd58d-6ndtv"] Feb 17 16:19:42 crc kubenswrapper[4829]: I0217 16:19:42.291994 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" path="/var/lib/kubelet/pods/54ae6e91-44b3-4b86-9d98-ff9d0b0624ca/volumes" Feb 17 16:19:42 crc kubenswrapper[4829]: I0217 16:19:42.293250 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" path="/var/lib/kubelet/pods/8f1cb833-fb61-463d-a2d4-c14d51370dc9/volumes" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.442825 4829 generic.go:334] "Generic (PLEG): container finished" podID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerID="8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" exitCode=137 Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.443149 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029"} Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.641088 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.719187 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720077 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720234 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720295 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") 
pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720323 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720385 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") pod \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\" (UID: \"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0\") " Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720593 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.720981 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.721043 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.725822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk" (OuterVolumeSpecName: "kube-api-access-dtcqk") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "kube-api-access-dtcqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.726134 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts" (OuterVolumeSpecName: "scripts") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.775510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.813903 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825015 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825067 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825082 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825094 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcqk\" (UniqueName: \"kubernetes.io/projected/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-kube-api-access-dtcqk\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.825117 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.852810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data" (OuterVolumeSpecName: "config-data") pod "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" (UID: "5b3fb6d4-3173-435d-bf9e-bc6cde0301b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:43 crc kubenswrapper[4829]: I0217 16:19:43.926956 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.456294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3fb6d4-3173-435d-bf9e-bc6cde0301b0","Type":"ContainerDied","Data":"b15b8a2c2fe4022bce337bd6c570aad6d1fe85a99014bfa877c56e943e1fb42f"} Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.457041 4829 scope.go:117] "RemoveContainer" containerID="8c127b3f2886b908bf515dd23cedb507644262f632d9e24df26b2f62aec67029" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.456419 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.504812 4829 scope.go:117] "RemoveContainer" containerID="8333ce04379c8d4602c0e5c295f814d5bdd9be8704057ba17e1e2bb10774216f" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.507607 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.520490 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.549503 4829 scope.go:117] "RemoveContainer" containerID="8e1ec495e69b883464e261824c72d1242cc93f566989a36e76f8d91490b3c8b3" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.549694 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550266 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc 
kubenswrapper[4829]: I0217 16:19:44.550287 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550307 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="init" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550314 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="init" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550325 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550332 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550342 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550347 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550358 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550363 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550378 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 
16:19:44.550384 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550399 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550404 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550413 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550418 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550433 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550439 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550454 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550461 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550472 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550478 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550685 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550696 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550707 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-central-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550720 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="proxy-httpd" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550730 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550745 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1cb833-fb61-463d-a2d4-c14d51370dc9" containerName="heat-engine" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550754 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="sg-core" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550764 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a26c9f-0ba5-4714-9b6e-5319f3ed903a" containerName="dnsmasq-dns" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550771 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" containerName="ceilometer-notification-agent" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550780 4829 
memory_manager.go:354] "RemoveStaleState removing state" podUID="531a6d2a-8cc6-4d30-a906-826fba92e926" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: E0217 16:19:44.550970 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.550977 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ae6e91-44b3-4b86-9d98-ff9d0b0624ca" containerName="heat-cfnapi" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.551194 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbedef6f-85e8-418a-b925-8d2a8e73bb5c" containerName="heat-api" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.553234 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.556360 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.556527 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.566621 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.590054 4829 scope.go:117] "RemoveContainer" containerID="1aa22b6c49ca73d43c1dce5ccec05650a2df7b039bb8de72cbf7d54e697b15b4" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643119 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643542 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643771 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.643962 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644116 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644290 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.644420 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.759747 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760084 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760773 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.761083 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.761298 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.760534 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.763854 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.764521 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.765943 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.766783 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.769826 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.780254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"ceilometer-0\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " pod="openstack/ceilometer-0" Feb 17 16:19:44 crc kubenswrapper[4829]: I0217 16:19:44.881613 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:19:45 crc kubenswrapper[4829]: I0217 16:19:45.386260 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:45 crc kubenswrapper[4829]: W0217 16:19:45.391803 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9 WatchSource:0}: Error finding container 2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9: Status 404 returned error can't find the container with id 2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9 Feb 17 16:19:45 crc kubenswrapper[4829]: I0217 16:19:45.473527 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9"} Feb 17 16:19:46 crc kubenswrapper[4829]: I0217 16:19:46.298243 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3fb6d4-3173-435d-bf9e-bc6cde0301b0" path="/var/lib/kubelet/pods/5b3fb6d4-3173-435d-bf9e-bc6cde0301b0/volumes" Feb 17 16:19:46 crc kubenswrapper[4829]: I0217 16:19:46.493842 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d"} Feb 17 16:19:46 crc kubenswrapper[4829]: E0217 16:19:46.544613 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:47 crc kubenswrapper[4829]: I0217 16:19:47.574007 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.111306 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.111434 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:19:48 crc kubenswrapper[4829]: E0217 16:19:48.579690 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54ddd6397355b84a6538404b7e0b74cacc0798f30ad9a6fdc63f5d6f25040eae/diff" to get inode usage: stat 
/var/lib/containers/storage/overlay/54ddd6397355b84a6538404b7e0b74cacc0798f30ad9a6fdc63f5d6f25040eae/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log" to get inode usage: stat /var/log/pods/openstack_neutron-b56799c5b-dmgjh_75783ffe-a672-4585-ae18-3c162d659ee7/neutron-api/0.log: no such file or directory Feb 17 16:19:48 crc kubenswrapper[4829]: I0217 16:19:48.661952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004"} Feb 17 16:19:49 crc kubenswrapper[4829]: I0217 16:19:49.673950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d"} Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.449325 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod544f59e2-daea-45db-99b4-d9714f620a74"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : Timed out while waiting for systemd to remove kubepods-besteffort-pod544f59e2_daea_45db_99b4_d9714f620a74.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.449650 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod544f59e2-daea-45db-99b4-d9714f620a74] : Timed out while waiting for systemd to remove kubepods-besteffort-pod544f59e2_daea_45db_99b4_d9714f620a74.slice" pod="openstack/nova-cell0-db-create-cnfbw" podUID="544f59e2-daea-45db-99b4-d9714f620a74" Feb 17 
16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.456109 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc909da16-2d5d-4706-adb8-f8402ed9f01e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : Timed out while waiting for systemd to remove kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.456139 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc909da16-2d5d-4706-adb8-f8402ed9f01e] : Timed out while waiting for systemd to remove kubepods-besteffort-podc909da16_2d5d_4706_adb8_f8402ed9f01e.slice" pod="openstack/nova-cell1-3357-account-create-update-rg852" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.566646 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddcdf2448-5ccb-4351-b022-de49263fd521"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : Timed out while waiting for systemd to remove kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice" Feb 17 16:19:50 crc kubenswrapper[4829]: E0217 16:19:50.566724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : unable to destroy cgroup paths for cgroup [kubepods besteffort poddcdf2448-5ccb-4351-b022-de49263fd521] : Timed out while waiting for systemd to remove kubepods-besteffort-poddcdf2448_5ccb_4351_b022_de49263fd521.slice" pod="openstack/nova-api-db-create-cglz5" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 
16:19:50.687457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerStarted","Data":"314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d"} Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687512 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cnfbw" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687586 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cglz5" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687838 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" containerID="cri-o://c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687866 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" containerID="cri-o://7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687889 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" containerID="cri-o://314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.687908 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" 
containerID="cri-o://82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" gracePeriod=30 Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.689690 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3357-account-create-update-rg852" Feb 17 16:19:50 crc kubenswrapper[4829]: I0217 16:19:50.726900 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1724717 podStartE2EDuration="6.726883061s" podCreationTimestamp="2026-02-17 16:19:44 +0000 UTC" firstStartedPulling="2026-02-17 16:19:45.399751066 +0000 UTC m=+1497.816769044" lastFinishedPulling="2026-02-17 16:19:49.954162427 +0000 UTC m=+1502.371180405" observedRunningTime="2026-02-17 16:19:50.715527073 +0000 UTC m=+1503.132545081" watchObservedRunningTime="2026-02-17 16:19:50.726883061 +0000 UTC m=+1503.143901039" Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.708831 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" exitCode=2 Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709178 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" exitCode=0 Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d"} Feb 17 16:19:51 crc kubenswrapper[4829]: I0217 16:19:51.709223 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004"} Feb 17 16:19:52 crc kubenswrapper[4829]: I0217 16:19:52.424897 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:52 crc kubenswrapper[4829]: I0217 16:19:52.424969 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:55 crc kubenswrapper[4829]: I0217 16:19:55.753217 4829 generic.go:334] "Generic (PLEG): container finished" podID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerID="56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029" exitCode=0 Feb 17 16:19:55 crc kubenswrapper[4829]: I0217 16:19:55.753300 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerDied","Data":"56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029"} Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.381187 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500810 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500855 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.500988 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.501111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") pod \"70d00488-ed97-4f10-bf11-7c57e5a4d631\" (UID: \"70d00488-ed97-4f10-bf11-7c57e5a4d631\") " Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.513212 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts" (OuterVolumeSpecName: "scripts") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.530869 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8" (OuterVolumeSpecName: "kube-api-access-qxbn8") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "kube-api-access-qxbn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.539180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data" (OuterVolumeSpecName: "config-data") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.551739 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d00488-ed97-4f10-bf11-7c57e5a4d631" (UID: "70d00488-ed97-4f10-bf11-7c57e5a4d631"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604661 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604702 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604715 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d00488-ed97-4f10-bf11-7c57e5a4d631-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.604732 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbn8\" (UniqueName: \"kubernetes.io/projected/70d00488-ed97-4f10-bf11-7c57e5a4d631-kube-api-access-qxbn8\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.784911 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" event={"ID":"70d00488-ed97-4f10-bf11-7c57e5a4d631","Type":"ContainerDied","Data":"3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443"} Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.784966 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3091accc460847ddf52aaf163732a70bfef2ace206047b41aac74f94efe5e443" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.785018 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f9vr7" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.998702 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:57 crc kubenswrapper[4829]: E0217 16:19:57.999273 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.999298 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:57 crc kubenswrapper[4829]: I0217 16:19:57.999633 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" containerName="nova-cell0-conductor-db-sync" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.000631 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.007323 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.007437 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wx8s7" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.028071 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.131374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 
16:19:58.131808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.131970 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234226 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234350 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.234480 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.239954 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.240456 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f709715-5e80-4988-8eb5-8bebcd673c47-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.253369 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-225nl\" (UniqueName: \"kubernetes.io/projected/8f709715-5e80-4988-8eb5-8bebcd673c47-kube-api-access-225nl\") pod \"nova-cell0-conductor-0\" (UID: \"8f709715-5e80-4988-8eb5-8bebcd673c47\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.329307 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:58 crc kubenswrapper[4829]: I0217 16:19:58.847639 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.809437 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f709715-5e80-4988-8eb5-8bebcd673c47","Type":"ContainerStarted","Data":"a5e36fd99e6e1002c5aa09f39496be5e7c16a987518bd8109a7d05cf53f78d75"} Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.809719 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f709715-5e80-4988-8eb5-8bebcd673c47","Type":"ContainerStarted","Data":"01609de051e1a240873448cf104c457d0bf876c7f3f7a4bba0b63795466bbf67"} Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.810289 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:19:59 crc kubenswrapper[4829]: I0217 16:19:59.837164 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.837141612 podStartE2EDuration="2.837141612s" podCreationTimestamp="2026-02-17 16:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:59.823730608 +0000 UTC m=+1512.240748576" watchObservedRunningTime="2026-02-17 16:19:59.837141612 +0000 UTC m=+1512.254159610" Feb 17 16:20:00 crc kubenswrapper[4829]: I0217 16:20:00.828351 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" exitCode=0 Feb 17 16:20:00 crc kubenswrapper[4829]: I0217 16:20:00.828433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d"} Feb 17 16:20:04 crc kubenswrapper[4829]: E0217 16:20:04.988133 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/08b1ceb9fd67392961b2a720dc2f4bc336a8a5170c8036f02d370bcb848fc25d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/08b1ceb9fd67392961b2a720dc2f4bc336a8a5170c8036f02d370bcb848fc25d/diff: no such file or directory, extraDiskErr: Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.360241 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.917910 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.919925 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.922563 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.922792 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 16:20:08 crc kubenswrapper[4829]: I0217 16:20:08.950885 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040527 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.040641 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.136556 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.138188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.142746 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.142865 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.143044 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.143125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" 
(UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.145058 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.178002 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.184506 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.185301 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.191696 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.192675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.196821 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.196892 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"nova-cell0-cell-mapping-7l7ns\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.197270 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.219991 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.241259 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.247656 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.247829 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.248032 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.308134 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.310022 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.315854 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.337960 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.339874 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.343536 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354785 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354846 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.354987 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") 
pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355107 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.355159 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.362869 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.381731 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.385281 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"nova-scheduler-0\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.386029 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"nova-scheduler-0\" (UID: 
\"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.395278 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.405973 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.408200 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.424067 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.457826 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458034 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458100 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458167 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs5q5\" 
(UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458208 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458235 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458273 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458329 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458360 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458387 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.458443 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.462342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.463804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.480342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.492565 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"nova-api-0\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.561951 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562308 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562418 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562442 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562467 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562490 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562514 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562530 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562597 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562648 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.562728 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.563104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.571704 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " 
pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.579395 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.579956 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.581789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"nova-metadata-0\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.584370 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.585563 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.590004 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664365 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664485 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664531 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664589 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664648 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" 
Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.664698 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.665292 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667111 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667378 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.667763 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.681809 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"dnsmasq-dns-7877d89589-g5wqn\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.772204 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.813260 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.853204 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.876342 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:09 crc kubenswrapper[4829]: I0217 16:20:09.988189 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.114695 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.367276 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.570391 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.822622 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:10 crc kubenswrapper[4829]: I0217 16:20:10.839759 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.001987 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerStarted","Data":"c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.002038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerStarted","Data":"7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.003917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"21037a41552d2f17b0298eab9cadbade38ca54aa96f604942f870e2e7cef5930"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.006009 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerStarted","Data":"28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.008517 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerStarted","Data":"571dce0f3dca1580b88fc77df97f1e4a84daf42acff7755a8cd9c913181ac9b2"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.011721 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"41f94608f0021132514460f997146b226afa5a638e41f12e1b716a14c00cd14b"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.013090 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerStarted","Data":"0a6baf72f36f68b63d71c5c1e9e99dced488541d38aaf0d4ecd5c3f870c08fd3"} Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.022316 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7l7ns" podStartSLOduration=3.022298034 podStartE2EDuration="3.022298034s" podCreationTimestamp="2026-02-17 16:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:11.016269601 +0000 UTC m=+1523.433287579" watchObservedRunningTime="2026-02-17 16:20:11.022298034 +0000 UTC m=+1523.439316002" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.492491 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.494695 4829 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.498147 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.498383 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.542624 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626559 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626670 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.626770 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729446 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.729533 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.735620 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.735767 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.762276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.772153 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"nova-cell1-conductor-db-sync-xbhtp\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:11 crc kubenswrapper[4829]: I0217 16:20:11.821329 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:12 crc kubenswrapper[4829]: I0217 16:20:12.032478 4829 generic.go:334] "Generic (PLEG): container finished" podID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerID="916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404" exitCode=0 Feb 17 16:20:12 crc kubenswrapper[4829]: I0217 16:20:12.032753 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404"} Feb 17 16:20:13 crc kubenswrapper[4829]: I0217 16:20:13.275351 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:13 crc kubenswrapper[4829]: I0217 16:20:13.286835 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.483651 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.882829 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:14 crc kubenswrapper[4829]: I0217 16:20:14.889915 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.093026 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.093067 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerStarted","Data":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.094857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerStarted","Data":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098539 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098652 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" containerID="cri-o://559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerStarted","Data":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.098619 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" containerID="cri-o://82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.103019 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" 
podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" gracePeriod=30 Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.103111 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerStarted","Data":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.105993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerStarted","Data":"035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.106034 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerStarted","Data":"428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.109621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerStarted","Data":"09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec"} Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.109789 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.127118 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.51111202 podStartE2EDuration="6.127099311s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.414766114 +0000 UTC 
m=+1522.831784092" lastFinishedPulling="2026-02-17 16:20:14.030753405 +0000 UTC m=+1526.447771383" observedRunningTime="2026-02-17 16:20:15.113816282 +0000 UTC m=+1527.530834260" watchObservedRunningTime="2026-02-17 16:20:15.127099311 +0000 UTC m=+1527.544117299" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.147382 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.364402747 podStartE2EDuration="6.147362919s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.139949033 +0000 UTC m=+1522.556967011" lastFinishedPulling="2026-02-17 16:20:13.922909205 +0000 UTC m=+1526.339927183" observedRunningTime="2026-02-17 16:20:15.136788213 +0000 UTC m=+1527.553806191" watchObservedRunningTime="2026-02-17 16:20:15.147362919 +0000 UTC m=+1527.564380897" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.158386 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.007402597 podStartE2EDuration="6.158369507s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.856802453 +0000 UTC m=+1523.273820431" lastFinishedPulling="2026-02-17 16:20:14.007769363 +0000 UTC m=+1526.424787341" observedRunningTime="2026-02-17 16:20:15.149154488 +0000 UTC m=+1527.566172456" watchObservedRunningTime="2026-02-17 16:20:15.158369507 +0000 UTC m=+1527.575387485" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.182541 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" podStartSLOduration=4.182522282 podStartE2EDuration="4.182522282s" podCreationTimestamp="2026-02-17 16:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:15.167014601 +0000 UTC m=+1527.584032579" 
watchObservedRunningTime="2026-02-17 16:20:15.182522282 +0000 UTC m=+1527.599540260" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.184432 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.858654951 podStartE2EDuration="6.184423253s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="2026-02-17 16:20:10.597089661 +0000 UTC m=+1523.014107639" lastFinishedPulling="2026-02-17 16:20:13.922857963 +0000 UTC m=+1526.339875941" observedRunningTime="2026-02-17 16:20:15.179053648 +0000 UTC m=+1527.596071626" watchObservedRunningTime="2026-02-17 16:20:15.184423253 +0000 UTC m=+1527.601441231" Feb 17 16:20:15 crc kubenswrapper[4829]: I0217 16:20:15.226822 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" podStartSLOduration=6.226805161 podStartE2EDuration="6.226805161s" podCreationTimestamp="2026-02-17 16:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:15.19391268 +0000 UTC m=+1527.610930658" watchObservedRunningTime="2026-02-17 16:20:15.226805161 +0000 UTC m=+1527.643823139" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.078374 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.121669 4829 generic.go:334] "Generic (PLEG): container finished" podID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" exitCode=0 Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.121700 4829 generic.go:334] "Generic (PLEG): container finished" podID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" exitCode=143 Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122148 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81822b2e-5592-4ac6-bf30-c8a3f97d7128","Type":"ContainerDied","Data":"41f94608f0021132514460f997146b226afa5a638e41f12e1b716a14c00cd14b"} Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122229 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.122376 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142098 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142213 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.142442 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") pod \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\" (UID: \"81822b2e-5592-4ac6-bf30-c8a3f97d7128\") " Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.146254 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs" (OuterVolumeSpecName: "logs") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.150251 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81822b2e-5592-4ac6-bf30-c8a3f97d7128-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.174727 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.191810 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z" (OuterVolumeSpecName: "kube-api-access-qdz4z") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "kube-api-access-qdz4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.195518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.228822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data" (OuterVolumeSpecName: "config-data") pod "81822b2e-5592-4ac6-bf30-c8a3f97d7128" (UID: "81822b2e-5592-4ac6-bf30-c8a3f97d7128"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252056 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdz4z\" (UniqueName: \"kubernetes.io/projected/81822b2e-5592-4ac6-bf30-c8a3f97d7128-kube-api-access-qdz4z\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252717 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.252778 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81822b2e-5592-4ac6-bf30-c8a3f97d7128-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.362966 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.363422 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": container with ID starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.363464 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} err="failed to get container status \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": container with ID 
starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.363489 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.364005 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364059 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} err="failed to get container status \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364086 4829 scope.go:117] "RemoveContainer" containerID="559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364460 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75"} err="failed to get container status \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": rpc error: code = NotFound desc = could not find container \"559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75\": 
container with ID starting with 559ed7cbd750da862c5a97af4d6a965ee1eede9f946483965da5f65157a0df75 not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364480 4829 scope.go:117] "RemoveContainer" containerID="82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.364852 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c"} err="failed to get container status \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": rpc error: code = NotFound desc = could not find container \"82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c\": container with ID starting with 82ca1080d03cd12233858fb5dc7c3a3758fbdb2f3d256629121fe608067cf12c not found: ID does not exist" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.446958 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.457253 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.474447 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.474981 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475000 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: E0217 16:20:16.475030 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc 
kubenswrapper[4829]: I0217 16:20:16.475037 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475249 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-metadata" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.475283 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" containerName="nova-metadata-log" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.476526 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.478668 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.479060 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.490879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560174 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560220 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " 
pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.560876 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.561063 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662659 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662708 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662793 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662819 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.662873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.664738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.675345 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.676653 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.679190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.689962 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"nova-metadata-0\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " pod="openstack/nova-metadata-0" Feb 17 16:20:16 crc kubenswrapper[4829]: I0217 16:20:16.796127 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:17 crc kubenswrapper[4829]: I0217 16:20:17.522009 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.159890 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.160390 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"c4d90ff6dc961ef3104c3f1654909960f94137d701493b08670847050b615a45"} Feb 17 16:20:18 crc kubenswrapper[4829]: I0217 16:20:18.324887 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81822b2e-5592-4ac6-bf30-c8a3f97d7128" path="/var/lib/kubelet/pods/81822b2e-5592-4ac6-bf30-c8a3f97d7128/volumes" Feb 17 
16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.173954 4829 generic.go:334] "Generic (PLEG): container finished" podID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerID="c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447" exitCode=0 Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.174043 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerDied","Data":"c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447"} Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.177079 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerStarted","Data":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.240739 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.240713876 podStartE2EDuration="3.240713876s" podCreationTimestamp="2026-02-17 16:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:19.230602573 +0000 UTC m=+1531.647620551" watchObservedRunningTime="2026-02-17 16:20:19.240713876 +0000 UTC m=+1531.657731894" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.590661 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.590957 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.625942 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.773647 4829 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.773735 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.853938 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.879038 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.975038 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:19 crc kubenswrapper[4829]: I0217 16:20:19.975307 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" containerID="cri-o://28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" gracePeriod=10 Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.212370 4829 generic.go:334] "Generic (PLEG): container finished" podID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerID="28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" exitCode=0 Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.212654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2"} Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.240210 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.241650 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.258806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.266147 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.278836 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.283548 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.292382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.319704 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374754 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374805 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374916 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.374968 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477436 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477504 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.477696 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 
16:20:20.477797 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.480330 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.481756 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.497873 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"aodh-db-create-zxj99\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.498285 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"aodh-cbfe-account-create-update-bfbsk\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.584339 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.623536 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.861934 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.862423 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.877999 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.898859 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990267 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990406 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990448 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990529 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990560 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990611 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990703 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") pod \"bef56b6a-4a1c-4305-a88d-3654df130c52\" (UID: \"bef56b6a-4a1c-4305-a88d-3654df130c52\") " Feb 17 16:20:20 crc kubenswrapper[4829]: I0217 16:20:20.990744 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") pod \"08208ef6-e99c-4f83-952c-5828df9b7af8\" (UID: \"08208ef6-e99c-4f83-952c-5828df9b7af8\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029265 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts" (OuterVolumeSpecName: "scripts") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029394 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7" (OuterVolumeSpecName: "kube-api-access-rh5d7") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "kube-api-access-rh5d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.029482 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h" (OuterVolumeSpecName: "kube-api-access-fg94h") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "kube-api-access-fg94h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: W0217 16:20:21.067753 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81822b2e_5592_4ac6_bf30_c8a3f97d7128.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81822b2e_5592_4ac6_bf30_c8a3f97d7128.slice: no such file or directory Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.093801 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.093830 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh5d7\" (UniqueName: \"kubernetes.io/projected/08208ef6-e99c-4f83-952c-5828df9b7af8-kube-api-access-rh5d7\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 
16:20:21.093842 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg94h\" (UniqueName: \"kubernetes.io/projected/bef56b6a-4a1c-4305-a88d-3654df130c52-kube-api-access-fg94h\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.098939 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.130626 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data" (OuterVolumeSpecName: "config-data") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.133302 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bef56b6a-4a1c-4305-a88d-3654df130c52" (UID: "bef56b6a-4a1c-4305-a88d-3654df130c52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.163431 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config" (OuterVolumeSpecName: "config") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.190982 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197399 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197423 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197434 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.197444 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bef56b6a-4a1c-4305-a88d-3654df130c52-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.212204 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.244925 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.301094 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.301123 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.315778 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "08208ef6-e99c-4f83-952c-5828df9b7af8" (UID: "08208ef6-e99c-4f83-952c-5828df9b7af8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: E0217 16:20:21.331545 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-conmon-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:21 crc kubenswrapper[4829]: E0217 16:20:21.334655 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-conmon-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-b931b3f3c1f8ae4c35ae362d6e45e3844fc65c9bb809b5a377a51919c5cec4c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d00488_ed97_4f10_bf11_7c57e5a4d631.slice/crio-56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75783ffe_a672_4585_ae18_3c162d659ee7.slice/crio-92f9ad9e39d6586e5adf42a3234116a048880b028d2c0d388d1a65d671ea53e9.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.361798 4829 generic.go:334] "Generic (PLEG): container finished" podID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerID="314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" exitCode=137 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.361891 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365440 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l7ns" event={"ID":"bef56b6a-4a1c-4305-a88d-3654df130c52","Type":"ContainerDied","Data":"7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365475 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.365538 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l7ns" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.385857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerStarted","Data":"2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.404118 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/08208ef6-e99c-4f83-952c-5828df9b7af8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.410174 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.413720 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lb9kf" event={"ID":"08208ef6-e99c-4f83-952c-5828df9b7af8","Type":"ContainerDied","Data":"d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.413787 4829 scope.go:117] "RemoveContainer" containerID="28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.424847 4829 generic.go:334] "Generic (PLEG): container finished" podID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerID="7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" exitCode=137 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.425632 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerDied","Data":"7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb"} Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.467850 4829 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.488959 4829 scope.go:117] "RemoveContainer" containerID="a012c5a512f8bfe479d215976c52020761d1d15b76063315ffc6b3942392eb4b" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.498602 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518367 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518565 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" containerID="cri-o://bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.518789 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" containerID="cri-o://15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.566730 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.566955 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" containerID="cri-o://960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.567256 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" 
containerName="nova-metadata-metadata" containerID="cri-o://9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" gracePeriod=30 Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.580476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.603400 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lb9kf"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.608961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609059 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609200 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609241 4829 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609349 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.609376 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") pod \"14067e2a-e82f-44fb-a2df-5b2627647d4c\" (UID: \"14067e2a-e82f-44fb-a2df-5b2627647d4c\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.620042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.620336 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.627174 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd" (OuterVolumeSpecName: "kube-api-access-k67zd") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "kube-api-access-k67zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.627366 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts" (OuterVolumeSpecName: "scripts") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.634466 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712376 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712668 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712678 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14067e2a-e82f-44fb-a2df-5b2627647d4c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.712690 4829 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-k67zd\" (UniqueName: \"kubernetes.io/projected/14067e2a-e82f-44fb-a2df-5b2627647d4c-kube-api-access-k67zd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.765653 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.774901 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.796699 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.796806 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.816003 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.870697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data" (OuterVolumeSpecName: "config-data") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.870782 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14067e2a-e82f-44fb-a2df-5b2627647d4c" (UID: "14067e2a-e82f-44fb-a2df-5b2627647d4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919810 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.919987 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") pod \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\" (UID: \"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb\") " Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.920629 4829 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.920648 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14067e2a-e82f-44fb-a2df-5b2627647d4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.934893 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.934926 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm" (OuterVolumeSpecName: "kube-api-access-85jpm") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "kube-api-access-85jpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:21 crc kubenswrapper[4829]: I0217 16:20:21.957082 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.010852 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data" (OuterVolumeSpecName: "config-data") pod "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" (UID: "a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022366 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022390 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jpm\" (UniqueName: \"kubernetes.io/projected/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-kube-api-access-85jpm\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022400 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.022409 4829 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.303356 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.315614 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" path="/var/lib/kubelet/pods/08208ef6-e99c-4f83-952c-5828df9b7af8/volumes" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.424968 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.425018 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.429229 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.430584 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.431072 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" 
containerID="cri-o://e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" gracePeriod=600 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439726 4829 generic.go:334] "Generic (PLEG): container finished" podID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerID="ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439768 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439791 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerDied","Data":"ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439836 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439885 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.439944 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") 
pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.440009 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") pod \"288faaff-8af6-4b89-aa56-5789d3b28b37\" (UID: \"288faaff-8af6-4b89-aa56-5789d3b28b37\") " Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.441192 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs" (OuterVolumeSpecName: "logs") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.443752 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/288faaff-8af6-4b89-aa56-5789d3b28b37-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.454859 4829 generic.go:334] "Generic (PLEG): container finished" podID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerID="1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.454924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerDied","Data":"1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.455028 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" 
event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerStarted","Data":"273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469268 4829 generic.go:334] "Generic (PLEG): container finished" podID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" exitCode=0 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469296 4829 generic.go:334] "Generic (PLEG): container finished" podID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" exitCode=143 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469353 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469379 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"288faaff-8af6-4b89-aa56-5789d3b28b37","Type":"ContainerDied","Data":"c4d90ff6dc961ef3104c3f1654909960f94137d701493b08670847050b615a45"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469776 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.469915 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.475844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr" (OuterVolumeSpecName: "kube-api-access-mmwjr") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "kube-api-access-mmwjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.477222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-58844cd98c-2snd2" event={"ID":"a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb","Type":"ContainerDied","Data":"0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.477317 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-58844cd98c-2snd2" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.491460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14067e2a-e82f-44fb-a2df-5b2627647d4c","Type":"ContainerDied","Data":"2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.491692 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.497671 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" exitCode=143 Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.498325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.505350 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data" (OuterVolumeSpecName: "config-data") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.513741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546781 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546820 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmwjr\" (UniqueName: \"kubernetes.io/projected/288faaff-8af6-4b89-aa56-5789d3b28b37-kube-api-access-mmwjr\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.546836 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.557801 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "288faaff-8af6-4b89-aa56-5789d3b28b37" (UID: "288faaff-8af6-4b89-aa56-5789d3b28b37"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.562695 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.671173 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.687037 4829 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/288faaff-8af6-4b89-aa56-5789d3b28b37-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.763499 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.763885 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.766216 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} err="failed to get container status \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": rpc error: 
code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.767147 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.769535 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.769705 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} err="failed to get container status \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.772010 4829 scope.go:117] "RemoveContainer" containerID="9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774143 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f"} err="failed to get container status \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": 
rpc error: code = NotFound desc = could not find container \"9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f\": container with ID starting with 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774193 4829 scope.go:117] "RemoveContainer" containerID="960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774538 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a"} err="failed to get container status \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": rpc error: code = NotFound desc = could not find container \"960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a\": container with ID starting with 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a not found: ID does not exist" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.774552 4829 scope.go:117] "RemoveContainer" containerID="7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.814787 4829 scope.go:117] "RemoveContainer" containerID="314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.818905 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.845752 4829 scope.go:117] "RemoveContainer" containerID="7613d92efa4acbd8ca5d3dc9f768c89637cad6e24b902e1c7fc2d9c429e1bf0d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.854710 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.869428 4829 scope.go:117] "RemoveContainer" 
containerID="82a2a54d7251108e065ba8c95ce4220899fdd0065a2bfa32e5332132eb3f8004" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.896753 4829 scope.go:117] "RemoveContainer" containerID="c56835cbf4e241003cf622ce6ef6667ca386e0ae9845114228c997eb7c2e0c0d" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897015 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897501 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="init" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897517 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="init" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897546 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897552 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897567 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897586 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897602 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897607 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 
16:20:22.897619 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897625 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897634 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897655 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897663 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897670 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 16:20:22.897684 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897690 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: E0217 
16:20:22.897702 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897707 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897919 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-log" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897936 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" containerName="nova-manage" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897946 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="sg-core" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897958 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="08208ef6-e99c-4f83-952c-5828df9b7af8" containerName="dnsmasq-dns" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897968 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-central-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897982 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" containerName="nova-metadata-metadata" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.897996 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="proxy-httpd" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.898008 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" containerName="heat-api" Feb 17 16:20:22 
crc kubenswrapper[4829]: I0217 16:20:22.898019 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" containerName="ceilometer-notification-agent" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.900112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.904278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.904375 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.913129 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.925996 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-58844cd98c-2snd2"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.941019 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.961892 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.983883 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.996313 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:22 crc kubenswrapper[4829]: I0217 16:20:22.998806 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.000874 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.001506 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.008378 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.048920 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049031 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049060 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049078 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049097 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049291 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.049727 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.151841 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.151988 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152019 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152067 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152087 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.152897 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153051 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153384 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153487 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153556 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153609 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.153663 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.156242 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 
16:20:23.157184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.157351 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.159914 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.160781 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.170151 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"ceilometer-0\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.181975 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod \"ceilometer-0\" (UID: 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\") " pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.234422 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255777 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255845 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.255907 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.256110 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.256202 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.257106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.259365 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.262504 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.265060 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.270532 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"nova-metadata-0\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.316644 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555039 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" exitCode=0 Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555430 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" containerID="cri-o://370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" gracePeriod=30 Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555536 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"} Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.555980 4829 scope.go:117] "RemoveContainer" containerID="1a7ff95adeb7615beb23b58e843015b163a9de7f3e3d66ad55586e18277a1158" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.557477 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:20:23 crc kubenswrapper[4829]: E0217 16:20:23.558030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.805611 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Feb 17 16:20:23 crc kubenswrapper[4829]: I0217 16:20:23.952077 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.101093 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.125867 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.182460 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") pod \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.182696 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") pod \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\" (UID: \"17cc49ce-4e47-470a-ad6b-a4127308a7e4\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.183339 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17cc49ce-4e47-470a-ad6b-a4127308a7e4" (UID: "17cc49ce-4e47-470a-ad6b-a4127308a7e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.183809 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17cc49ce-4e47-470a-ad6b-a4127308a7e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.188757 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k" (OuterVolumeSpecName: "kube-api-access-ssj5k") pod "17cc49ce-4e47-470a-ad6b-a4127308a7e4" (UID: "17cc49ce-4e47-470a-ad6b-a4127308a7e4"). InnerVolumeSpecName "kube-api-access-ssj5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.286355 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") pod \"38fcc02f-9122-4ea6-bb0e-ef135805c127\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.286606 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwtp\" (UniqueName: \"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") pod \"38fcc02f-9122-4ea6-bb0e-ef135805c127\" (UID: \"38fcc02f-9122-4ea6-bb0e-ef135805c127\") " Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.287459 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssj5k\" (UniqueName: \"kubernetes.io/projected/17cc49ce-4e47-470a-ad6b-a4127308a7e4-kube-api-access-ssj5k\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.290752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38fcc02f-9122-4ea6-bb0e-ef135805c127" (UID: "38fcc02f-9122-4ea6-bb0e-ef135805c127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.290947 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp" (OuterVolumeSpecName: "kube-api-access-mvwtp") pod "38fcc02f-9122-4ea6-bb0e-ef135805c127" (UID: "38fcc02f-9122-4ea6-bb0e-ef135805c127"). InnerVolumeSpecName "kube-api-access-mvwtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.313258 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14067e2a-e82f-44fb-a2df-5b2627647d4c" path="/var/lib/kubelet/pods/14067e2a-e82f-44fb-a2df-5b2627647d4c/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.314370 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288faaff-8af6-4b89-aa56-5789d3b28b37" path="/var/lib/kubelet/pods/288faaff-8af6-4b89-aa56-5789d3b28b37/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.315133 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb" path="/var/lib/kubelet/pods/a3ec8820-05b9-4a3f-bcb0-e842c5cd79eb/volumes" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.392489 4829 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fcc02f-9122-4ea6-bb0e-ef135805c127-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.392557 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwtp\" (UniqueName: 
\"kubernetes.io/projected/38fcc02f-9122-4ea6-bb0e-ef135805c127-kube-api-access-mvwtp\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.590914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cbfe-account-create-update-bfbsk" event={"ID":"17cc49ce-4e47-470a-ad6b-a4127308a7e4","Type":"ContainerDied","Data":"273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.590987 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273137a5398f128fdc08a67365dabfc75941f8c796dc4bafb4490492d2ff9df2" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.591089 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cbfe-account-create-update-bfbsk" Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.592940 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.594564 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.599942 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:20:24 crc kubenswrapper[4829]: E0217 16:20:24.600079 4829 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601706 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601752 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.601765 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerStarted","Data":"7f678395f28b403dc65226210aa2f82c7e9fac520b66b5fae571b8af46a56688"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.606926 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.606986 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"3f1143368869422a684a872f85799e4eab53674e7f6171e067b82963a2f8f099"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.609250 
4829 generic.go:334] "Generic (PLEG): container finished" podID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerID="035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83" exitCode=0 Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.609326 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerDied","Data":"035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.614973 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zxj99" event={"ID":"38fcc02f-9122-4ea6-bb0e-ef135805c127","Type":"ContainerDied","Data":"2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf"} Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.615010 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zxj99" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.615012 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d223b08a64b7449ad1b0408889a63647597fa6c544b36280cd111086ebe78cf" Feb 17 16:20:24 crc kubenswrapper[4829]: I0217 16:20:24.644256 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.644229528 podStartE2EDuration="2.644229528s" podCreationTimestamp="2026-02-17 16:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:24.627753892 +0000 UTC m=+1537.044771950" watchObservedRunningTime="2026-02-17 16:20:24.644229528 +0000 UTC m=+1537.061247516" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.630080 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"} Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.702755 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:25 crc kubenswrapper[4829]: E0217 16:20:25.703290 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703303 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: E0217 16:20:25.703323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703328 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703586 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" containerName="mariadb-account-create-update" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.703617 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" containerName="mariadb-database-create" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.704423 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708072 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708416 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.708426 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.709280 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.747317 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.845909 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.845988 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.846079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: 
\"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.846119 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.951914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952016 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952046 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.952068 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959203 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959534 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.959533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:25 crc kubenswrapper[4829]: I0217 16:20:25.979228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"aodh-db-sync-89gpt\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.070616 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.099048 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159696 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159804 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159844 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.159868 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") pod \"264a77a9-afad-42ac-ac8f-7d705e242db5\" (UID: \"264a77a9-afad-42ac-ac8f-7d705e242db5\") " Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.167986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v" (OuterVolumeSpecName: "kube-api-access-zrc6v") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "kube-api-access-zrc6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.212419 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts" (OuterVolumeSpecName: "scripts") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.240012 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data" (OuterVolumeSpecName: "config-data") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.244787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "264a77a9-afad-42ac-ac8f-7d705e242db5" (UID: "264a77a9-afad-42ac-ac8f-7d705e242db5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262806 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262838 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262849 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrc6v\" (UniqueName: \"kubernetes.io/projected/264a77a9-afad-42ac-ac8f-7d705e242db5-kube-api-access-zrc6v\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.262859 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264a77a9-afad-42ac-ac8f-7d705e242db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.643621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.646325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" event={"ID":"264a77a9-afad-42ac-ac8f-7d705e242db5","Type":"ContainerDied","Data":"428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e"} Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.646366 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428842d0286179227ed247dc24b54c6c89a853443278784e982ab08cd471963e" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 
16:20:26.646416 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbhtp" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.725178 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:26 crc kubenswrapper[4829]: E0217 16:20:26.726184 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.726203 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.726426 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" containerName="nova-cell1-conductor-db-sync" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.727366 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.729972 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.748621 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.763630 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.775309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.775839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.776053 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.878861 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.878979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.879083 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.884149 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.884200 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe67602-ae51-43a0-b450-af654c573d9a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:26 crc kubenswrapper[4829]: I0217 16:20:26.903743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v67q7\" (UniqueName: \"kubernetes.io/projected/abe67602-ae51-43a0-b450-af654c573d9a-kube-api-access-v67q7\") pod \"nova-cell1-conductor-0\" (UID: 
\"abe67602-ae51-43a0-b450-af654c573d9a\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.121262 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.446651 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.493962 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.494412 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.494453 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") pod \"fcc83a9a-ecb1-46dd-be33-145b81792b63\" (UID: \"fcc83a9a-ecb1-46dd-be33-145b81792b63\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.528019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx" (OuterVolumeSpecName: "kube-api-access-gvzjx") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "kube-api-access-gvzjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.568780 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data" (OuterVolumeSpecName: "config-data") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.593742 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc83a9a-ecb1-46dd-be33-145b81792b63" (UID: "fcc83a9a-ecb1-46dd-be33-145b81792b63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602156 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602202 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc83a9a-ecb1-46dd-be33-145b81792b63-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.602216 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvzjx\" (UniqueName: \"kubernetes.io/projected/fcc83a9a-ecb1-46dd-be33-145b81792b63-kube-api-access-gvzjx\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.635883 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703228 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703268 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.703360 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") pod \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\" (UID: \"f6e04e6e-a14a-40dc-8938-14c25fe5b775\") " Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.714784 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs" (OuterVolumeSpecName: "logs") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722035 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" exitCode=0 Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722097 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722121 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f6e04e6e-a14a-40dc-8938-14c25fe5b775","Type":"ContainerDied","Data":"21037a41552d2f17b0298eab9cadbade38ca54aa96f604942f870e2e7cef5930"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722136 4829 scope.go:117] "RemoveContainer" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.722274 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.723914 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj" (OuterVolumeSpecName: "kube-api-access-lf9zj") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "kube-api-access-lf9zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.752258 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.765134 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerStarted","Data":"a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.766797 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data" (OuterVolumeSpecName: "config-data") pod "f6e04e6e-a14a-40dc-8938-14c25fe5b775" (UID: "f6e04e6e-a14a-40dc-8938-14c25fe5b775"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771795 4829 generic.go:334] "Generic (PLEG): container finished" podID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" exitCode=0 Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771844 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerDied","Data":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771875 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fcc83a9a-ecb1-46dd-be33-145b81792b63","Type":"ContainerDied","Data":"0a6baf72f36f68b63d71c5c1e9e99dced488541d38aaf0d4ecd5c3f870c08fd3"} Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.771954 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.805766 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806176 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e04e6e-a14a-40dc-8938-14c25fe5b775-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806189 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf9zj\" (UniqueName: \"kubernetes.io/projected/f6e04e6e-a14a-40dc-8938-14c25fe5b775-kube-api-access-lf9zj\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.806198 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e04e6e-a14a-40dc-8938-14c25fe5b775-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.811297 4829 scope.go:117] "RemoveContainer" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.842985 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.870028 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882086 4829 scope.go:117] "RemoveContainer" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882231 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.882891 4829 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.882955 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.882965 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.883006 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883016 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883314 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-api" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883343 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" containerName="nova-scheduler-scheduler" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.883371 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" containerName="nova-api-log" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.884456 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.887234 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.887554 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.892124 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": container with ID starting with 15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4 not found: ID does not exist" containerID="15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.892189 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4"} err="failed to get container status \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": rpc error: code = NotFound desc = could not find container \"15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4\": container with ID starting with 15be2f02ce7824d6d7d46afb5fd19ed29a85c6c0c90fae89d1134d22d7a0c8d4 not found: ID does not exist" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.892252 4829 scope.go:117] "RemoveContainer" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.898872 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": container with ID starting with bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0 not found: ID does not 
exist" containerID="bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.898918 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0"} err="failed to get container status \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": rpc error: code = NotFound desc = could not find container \"bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0\": container with ID starting with bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0 not found: ID does not exist" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.898946 4829 scope.go:117] "RemoveContainer" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910308 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910398 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.910495 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.935801 4829 scope.go:117] "RemoveContainer" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: E0217 16:20:27.936481 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": container with ID starting with 370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c not found: ID does not exist" containerID="370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c" Feb 17 16:20:27 crc kubenswrapper[4829]: I0217 16:20:27.936531 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c"} err="failed to get container status \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": rpc error: code = NotFound desc = could not find container \"370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c\": container with ID starting with 370463039bb98d2890a666d0cf45ee6b02bc6f70e3995b1fc8807b90f48ce57c not found: ID does not exist" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012625 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012787 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.012914 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.019027 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.019073 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.041106 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"nova-scheduler-0\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.078313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.103063 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.131742 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.147107 
4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.148965 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.151423 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.161613 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.220752 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232630 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232853 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.232943 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334745 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334873 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334898 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.334937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.335456 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.352407 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.355291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.356050 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"nova-api-0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.360272 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e04e6e-a14a-40dc-8938-14c25fe5b775" path="/var/lib/kubelet/pods/f6e04e6e-a14a-40dc-8938-14c25fe5b775/volumes" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.361219 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc83a9a-ecb1-46dd-be33-145b81792b63" path="/var/lib/kubelet/pods/fcc83a9a-ecb1-46dd-be33-145b81792b63/volumes" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.362287 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.362391 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.424622 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:28 crc kubenswrapper[4829]: I0217 16:20:28.805179 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"abe67602-ae51-43a0-b450-af654c573d9a","Type":"ContainerStarted","Data":"3077716e588c41a44507c07c5de41c5d7d6babfb3e348a3cb7fef8e4bbd70e1a"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.142798 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.231971 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.822410 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"abe67602-ae51-43a0-b450-af654c573d9a","Type":"ContainerStarted","Data":"51b71a30ce15c56c718cb73e47a02d807264cff0e06a64a34ed6fc7686b8e02a"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.822691 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.828038 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerStarted","Data":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.829043 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.830352 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerStarted","Data":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.830374 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerStarted","Data":"c52c06fac7bbd9c26185cdf4701a182bdfd4bd0e4897e4f1d991aa5849c43671"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.831917 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.831940 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"622a936aec57e0c945ae7671635046510015465545d885452898518495289721"} Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.852493 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.8524567530000002 podStartE2EDuration="3.852456753s" podCreationTimestamp="2026-02-17 16:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:29.838902066 +0000 UTC m=+1542.255920044" watchObservedRunningTime="2026-02-17 16:20:29.852456753 +0000 UTC m=+1542.269474761" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.881312 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.881294804 podStartE2EDuration="2.881294804s" podCreationTimestamp="2026-02-17 16:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:29.854047586 +0000 UTC m=+1542.271065564" watchObservedRunningTime="2026-02-17 16:20:29.881294804 +0000 UTC m=+1542.298312782" Feb 17 16:20:29 crc kubenswrapper[4829]: I0217 16:20:29.902486 4829 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.106170176 podStartE2EDuration="7.902466897s" podCreationTimestamp="2026-02-17 16:20:22 +0000 UTC" firstStartedPulling="2026-02-17 16:20:23.81111725 +0000 UTC m=+1536.228135228" lastFinishedPulling="2026-02-17 16:20:28.607413971 +0000 UTC m=+1541.024431949" observedRunningTime="2026-02-17 16:20:29.878061156 +0000 UTC m=+1542.295079134" watchObservedRunningTime="2026-02-17 16:20:29.902466897 +0000 UTC m=+1542.319484875" Feb 17 16:20:30 crc kubenswrapper[4829]: I0217 16:20:30.843182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerStarted","Data":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} Feb 17 16:20:30 crc kubenswrapper[4829]: I0217 16:20:30.873888 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.873875061 podStartE2EDuration="2.873875061s" podCreationTimestamp="2026-02-17 16:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:30.863271883 +0000 UTC m=+1543.280289861" watchObservedRunningTime="2026-02-17 16:20:30.873875061 +0000 UTC m=+1543.290893039" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.222520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.316884 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:20:33 crc kubenswrapper[4829]: I0217 16:20:33.316947 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.332759 4829 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.332781 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.928415 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerStarted","Data":"42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354"} Feb 17 16:20:34 crc kubenswrapper[4829]: I0217 16:20:34.962498 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-89gpt" podStartSLOduration=2.301000037 podStartE2EDuration="9.962474809s" podCreationTimestamp="2026-02-17 16:20:25 +0000 UTC" firstStartedPulling="2026-02-17 16:20:26.771330294 +0000 UTC m=+1539.188348272" lastFinishedPulling="2026-02-17 16:20:34.432805066 +0000 UTC m=+1546.849823044" observedRunningTime="2026-02-17 16:20:34.948916121 +0000 UTC m=+1547.365934109" watchObservedRunningTime="2026-02-17 16:20:34.962474809 +0000 UTC m=+1547.379492787" Feb 17 16:20:36 crc kubenswrapper[4829]: I0217 16:20:36.279619 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:20:36 crc kubenswrapper[4829]: E0217 16:20:36.280251 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.165608 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.975012 4829 generic.go:334] "Generic (PLEG): container finished" podID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerID="42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354" exitCode=0 Feb 17 16:20:37 crc kubenswrapper[4829]: I0217 16:20:37.975089 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerDied","Data":"42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354"} Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.222136 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.277037 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.426175 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:38 crc kubenswrapper[4829]: I0217 16:20:38.426235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.077462 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.490841 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.509865 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.510250 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.249:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552083 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552171 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552204 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.552710 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") pod \"c89e689f-68fd-4357-a2a0-1d4b8d130702\" (UID: \"c89e689f-68fd-4357-a2a0-1d4b8d130702\") " Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.560548 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h" (OuterVolumeSpecName: "kube-api-access-wj88h") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "kube-api-access-wj88h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.564216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts" (OuterVolumeSpecName: "scripts") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.590823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data" (OuterVolumeSpecName: "config-data") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.593630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c89e689f-68fd-4357-a2a0-1d4b8d130702" (UID: "c89e689f-68fd-4357-a2a0-1d4b8d130702"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655484 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655530 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655543 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e689f-68fd-4357-a2a0-1d4b8d130702-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:39 crc kubenswrapper[4829]: I0217 16:20:39.655557 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj88h\" (UniqueName: \"kubernetes.io/projected/c89e689f-68fd-4357-a2a0-1d4b8d130702-kube-api-access-wj88h\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.034860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-89gpt" event={"ID":"c89e689f-68fd-4357-a2a0-1d4b8d130702","Type":"ContainerDied","Data":"a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64"} Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.035218 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67851b58fdca35e45692a75dfaad303a2ed17c8fb928d9306138cb630acef64" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.034907 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-89gpt" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.835430 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:40 crc kubenswrapper[4829]: E0217 16:20:40.835989 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.836003 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.836201 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" containerName="aodh-db-sync" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.838308 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.846970 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.847164 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.847225 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.851879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988192 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc 
kubenswrapper[4829]: I0217 16:20:40.988259 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:40 crc kubenswrapper[4829]: I0217 16:20:40.988815 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091283 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.091474 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.096491 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.099526 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.111017 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.111415 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"aodh-0\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.171029 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:20:41 crc kubenswrapper[4829]: I0217 16:20:41.721467 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:42 crc kubenswrapper[4829]: I0217 16:20:42.057420 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"c4afff1a2ba6d2a5ca1bb51c6475f556a5d2736c3b4ec308f87e7a0a06dccc60"} Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.078830 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.322357 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.333489 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.339231 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.394806 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395121 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent" containerID="cri-o://b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395259 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" 
containerName="proxy-httpd" containerID="cri-o://89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395302 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core" containerID="cri-o://9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.395335 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent" containerID="cri-o://e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" gracePeriod=30 Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.405388 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.244:3000/\": EOF" Feb 17 16:20:43 crc kubenswrapper[4829]: I0217 16:20:43.734479 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.097860 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" exitCode=0 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098479 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" exitCode=2 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098543 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" 
containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" exitCode=0 Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.097939 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.098693 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} Feb 17 16:20:44 crc kubenswrapper[4829]: I0217 16:20:44.112652 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:20:45 crc kubenswrapper[4829]: I0217 16:20:45.112915 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.153208 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fcc02f_9122_4ea6_bb0e_ef135805c127.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fcc02f_9122_4ea6_bb0e_ef135805c127.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.153781 4829 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cc49ce_4e47_470a_ad6b_a4127308a7e4.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17cc49ce_4e47_470a_ad6b_a4127308a7e4.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.172256 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288faaff_8af6_4b89_aa56_5789d3b28b37.slice/crio-9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f.scope WatchSource:0}: Error finding container 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f: Status 404 returned error can't find the container with id 9369212132b9ef18cef30d28e427c779f00aa129485a6a79475ee927a354f56f Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.172505 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288faaff_8af6_4b89_aa56_5789d3b28b37.slice/crio-960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a.scope WatchSource:0}: Error finding container 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a: Status 404 returned error can't find the container with id 960b67520845ec5be4ad32e65a5ff8766d10a9ec2fd5f6cda1a4346c45d7b85a Feb 17 16:20:45 crc kubenswrapper[4829]: W0217 16:20:45.178216 4829 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc89e689f_68fd_4357_a2a0_1d4b8d130702.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc89e689f_68fd_4357_a2a0_1d4b8d130702.slice: no such file or directory Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.277049 4829 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.277601 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda05ad89_4eff_401a_9006_935800aab7d9.slice/crio-7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd.scope\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:45 crc kubenswrapper[4829]: E0217 16:20:45.284564 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-conmon-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb42864_7e0c_40a9_a14a_5f4155ed0e94.slice/crio-e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-0bb48debe1ed5a7e44fbba9fcb87f98d2aeac9b9fceafe390613ede2ce1927ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-d996c658b152cd8f67300adf60559ad2a4ed286cd139b6ee9ade25d08e5b74ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice/crio-conmon-7d1f8d42f80ce714e146ac95138cb554e66e1aad797635934282aaba828ce2bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda05ad89_4eff_401a_9006_935800aab7d9.slice/crio-7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08208ef6_e99c_4f83_952c_5828df9b7af8.slice/crio-conmon-28db9e1bb1612222293186158e2500a2025654aa7aa2f2ab362de9a2d87f77a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ec8820_05b9_4a3f_bcb0_e842c5cd79eb.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-2deff779eb69efe8f94454d55d7309e1519a6df83136dbdf65ded8ba890ecac9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef56b6a_4a1c_4305_a88d_3654df130c52.slice/crio-7bfae7f6a720d5cf7c9479243e279717576f2b3711182c5b442a53cb51e1e93f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14067e2a_e82f_44fb_a2df_5b2627647d4c.slice/crio-conmon-314a253e181cda321d37f8b25cf655be2cd6b88547dc5796781d0e62f40d351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e04e6e_a14a_40dc_8938_14c25fe5b775.slice/crio-conmon-bc3e91b394dd3e665473103380b1d6924dfceb0a73a11e0f34c596ee58bc4df0.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.075597 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.134862 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"} Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136573 4829 generic.go:334] "Generic (PLEG): container finished" podID="da05ad89-4eff-401a-9006-935800aab7d9" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" exitCode=137 Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136700 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136715 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerDied","Data":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"} Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136776 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da05ad89-4eff-401a-9006-935800aab7d9","Type":"ContainerDied","Data":"571dce0f3dca1580b88fc77df97f1e4a84daf42acff7755a8cd9c913181ac9b2"} Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.136800 4829 scope.go:117] "RemoveContainer" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.160880 4829 scope.go:117] "RemoveContainer" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" Feb 17 16:20:46 crc kubenswrapper[4829]: E0217 16:20:46.161334 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": container with ID starting with 7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd not found: ID does not exist" containerID="7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.161379 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd"} err="failed to get container status \"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": rpc error: code = NotFound desc = could not find container \"7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd\": container with ID starting with 7f4fbe75c72828101b5d861f9373f1913365783bc2aa473e3d351291d09703cd not found: ID does not exist" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208410 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208663 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.208756 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") pod \"da05ad89-4eff-401a-9006-935800aab7d9\" (UID: \"da05ad89-4eff-401a-9006-935800aab7d9\") " Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 
16:20:46.215611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5" (OuterVolumeSpecName: "kube-api-access-bs5q5") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "kube-api-access-bs5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.247472 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data" (OuterVolumeSpecName: "config-data") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.253222 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da05ad89-4eff-401a-9006-935800aab7d9" (UID: "da05ad89-4eff-401a-9006-935800aab7d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311243 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311272 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs5q5\" (UniqueName: \"kubernetes.io/projected/da05ad89-4eff-401a-9006-935800aab7d9-kube-api-access-bs5q5\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.311281 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da05ad89-4eff-401a-9006-935800aab7d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.463873 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.475820 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.490455 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:46 crc kubenswrapper[4829]: E0217 16:20:46.491018 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.491036 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.491255 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="da05ad89-4eff-401a-9006-935800aab7d9" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 
16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.492077 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.494020 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.495725 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.498862 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.507986 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620672 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620717 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.620956 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.621195 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723904 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723959 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.723990 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.724021 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.724106 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.731044 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.733420 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.734172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: 
I0217 16:20:46.745965 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.746510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nsq\" (UniqueName: \"kubernetes.io/projected/fa5f0bda-7dee-4ea8-9b6c-ec30ce341044-kube-api-access-z7nsq\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:46 crc kubenswrapper[4829]: I0217 16:20:46.809791 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:47 crc kubenswrapper[4829]: I0217 16:20:47.406092 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.019498 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171263 4829 generic.go:334] "Generic (PLEG): container finished" podID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" exitCode=0 Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171402 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"} Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171433 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bda35ab-f2ff-46ac-8733-76b7df307990","Type":"ContainerDied","Data":"3f1143368869422a684a872f85799e4eab53674e7f6171e067b82963a2f8f099"} Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171450 4829 scope.go:117] "RemoveContainer" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.171602 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.176712 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044","Type":"ContainerStarted","Data":"dcfb31c558debe06e87a6975cd538adbc1f28025b77622dd134a53ec2f462af8"} Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.176986 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa5f0bda-7dee-4ea8-9b6c-ec30ce341044","Type":"ContainerStarted","Data":"785e9ac7c74b47df9879880dd011fc9def07c1669535efc483de5a1372e3fc5e"} Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerStarted","Data":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"} Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180182 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" containerID="cri-o://41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" gracePeriod=30 Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180717 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" containerID="cri-o://eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" gracePeriod=30 Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.180778 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" containerID="cri-o://0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" gracePeriod=30 Feb 17 16:20:48 crc 
kubenswrapper[4829]: I0217 16:20:48.180841 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" containerID="cri-o://25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" gracePeriod=30 Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211347 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211659 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211794 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211865 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.211991 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") pod 
\"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212110 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212282 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") pod \"0bda35ab-f2ff-46ac-8733-76b7df307990\" (UID: \"0bda35ab-f2ff-46ac-8733-76b7df307990\") " Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.212109 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.214013 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.215098 4829 scope.go:117] "RemoveContainer" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.216178 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.220214 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7" (OuterVolumeSpecName: "kube-api-access-6hkr7") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "kube-api-access-6hkr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.225119 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts" (OuterVolumeSpecName: "scripts") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.231407 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.231386904 podStartE2EDuration="2.231386904s" podCreationTimestamp="2026-02-17 16:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:48.193900435 +0000 UTC m=+1560.610918413" watchObservedRunningTime="2026-02-17 16:20:48.231386904 +0000 UTC m=+1560.648404882" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.237526 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.501004793 podStartE2EDuration="8.237510039s" podCreationTimestamp="2026-02-17 16:20:40 +0000 UTC" firstStartedPulling="2026-02-17 16:20:41.723238491 +0000 UTC m=+1554.140256469" lastFinishedPulling="2026-02-17 16:20:47.459743737 +0000 UTC m=+1559.876761715" observedRunningTime="2026-02-17 16:20:48.212220428 +0000 UTC m=+1560.629238406" watchObservedRunningTime="2026-02-17 16:20:48.237510039 +0000 UTC m=+1560.654528017" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.294943 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.295245 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.309689 4829 scope.go:117] "RemoveContainer" 
containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.317765 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da05ad89-4eff-401a-9006-935800aab7d9" path="/var/lib/kubelet/pods/da05ad89-4eff-401a-9006-935800aab7d9/volumes" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.317883 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bda35ab-f2ff-46ac-8733-76b7df307990-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.318875 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.318957 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hkr7\" (UniqueName: \"kubernetes.io/projected/0bda35ab-f2ff-46ac-8733-76b7df307990-kube-api-access-6hkr7\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.332833 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.358790 4829 scope.go:117] "RemoveContainer" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.362510 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.379304 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data" (OuterVolumeSpecName: "config-data") pod "0bda35ab-f2ff-46ac-8733-76b7df307990" (UID: "0bda35ab-f2ff-46ac-8733-76b7df307990"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.388550 4829 scope.go:117] "RemoveContainer" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389017 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": container with ID starting with 89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4 not found: ID does not exist" containerID="89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389055 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4"} err="failed to get container status \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": rpc error: code = NotFound desc = could not find container \"89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4\": container with ID starting with 89106c9a5044c70a8064977de46c9c048eca3bb85e0322db1cc4e9b878289cc4 not found: ID does not exist" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389079 4829 scope.go:117] "RemoveContainer" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389369 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": container with ID starting with 9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9 not found: ID does not exist" containerID="9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389394 
4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9"} err="failed to get container status \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": rpc error: code = NotFound desc = could not find container \"9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9\": container with ID starting with 9715f680f3b7d6a97193c8632e2dfe1cbfc8c013671b47dca4a98028bb9c87a9 not found: ID does not exist" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389415 4829 scope.go:117] "RemoveContainer" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.389705 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": container with ID starting with e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238 not found: ID does not exist" containerID="e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389731 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238"} err="failed to get container status \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": rpc error: code = NotFound desc = could not find container \"e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238\": container with ID starting with e662c6fb11c175eb5fd940b2f66c5782bc38249f78970480f834c166608d9238 not found: ID does not exist" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.389747 4829 scope.go:117] "RemoveContainer" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 
16:20:48.390009 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": container with ID starting with b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75 not found: ID does not exist" containerID="b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.390034 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75"} err="failed to get container status \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": rpc error: code = NotFound desc = could not find container \"b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75\": container with ID starting with b8df706b2ef1b1c3fee7c4d356193f0e71c923a3194d3093a89592efab699c75 not found: ID does not exist" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424500 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424531 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.424542 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bda35ab-f2ff-46ac-8733-76b7df307990-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.429959 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:20:48 crc 
kubenswrapper[4829]: I0217 16:20:48.430508 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.431649 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.434399 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.512184 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.532626 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.554150 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555026 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555049 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555079 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555089 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555117 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555126 4829 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core" Feb 17 16:20:48 crc kubenswrapper[4829]: E0217 16:20:48.555147 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555155 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555458 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-central-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555489 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="proxy-httpd" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555510 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="ceilometer-notification-agent" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.555538 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" containerName="sg-core" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.558657 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.566606 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.575857 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.576092 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633076 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633166 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633202 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633330 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " 
pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633419 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.633648 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735761 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.735902 4829 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736291 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736459 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736542 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.736637 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.737104 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc 
kubenswrapper[4829]: I0217 16:20:48.737357 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.740558 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.740627 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.741866 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.743035 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"ceilometer-0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:48 crc kubenswrapper[4829]: I0217 16:20:48.763297 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"ceilometer-0\" (UID: 
\"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " pod="openstack/ceilometer-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.012881 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.232951 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233203 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233212 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" exitCode=0 Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.233056 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.234232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.234246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.235162 4829 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.238731 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.437697 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.454929 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.502135 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582004 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582299 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 
16:20:49.582391 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.582555 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.673649 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684744 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684826 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod 
\"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684892 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.684997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.685028 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.685859 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") 
" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.695910 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699329 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699717 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.699729 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc kubenswrapper[4829]: I0217 16:20:49.711768 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"dnsmasq-dns-6d99f6bc7f-cq899\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:49 crc 
kubenswrapper[4829]: I0217 16:20:49.782058 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.249200 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"4f38d7e9c21e5a5bb4aa4283aef17c56de184252a9a841ed16ca27e145f9895d"} Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.293510 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bda35ab-f2ff-46ac-8733-76b7df307990" path="/var/lib/kubelet/pods/0bda35ab-f2ff-46ac-8733-76b7df307990/volumes" Feb 17 16:20:50 crc kubenswrapper[4829]: I0217 16:20:50.295129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:20:50 crc kubenswrapper[4829]: W0217 16:20:50.296628 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fdb8e01_6d92_47be_a6a8_4d2e39d42152.slice/crio-9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee WatchSource:0}: Error finding container 9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee: Status 404 returned error can't find the container with id 9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.259721 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.262670 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerID="d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68" exitCode=0 Feb 17 16:20:51 crc 
kubenswrapper[4829]: I0217 16:20:51.262766 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.262810 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerStarted","Data":"9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee"} Feb 17 16:20:51 crc kubenswrapper[4829]: I0217 16:20:51.811327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.099477 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.274714 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerStarted","Data":"5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.274798 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276213 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276329 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" containerID="cri-o://a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" gracePeriod=30 Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.276382 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" containerID="cri-o://3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" gracePeriod=30 Feb 17 16:20:52 crc kubenswrapper[4829]: I0217 16:20:52.331892 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" podStartSLOduration=3.33187514 podStartE2EDuration="3.33187514s" podCreationTimestamp="2026-02-17 16:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:52.296628531 +0000 UTC m=+1564.713646509" watchObservedRunningTime="2026-02-17 16:20:52.33187514 +0000 UTC m=+1564.748893118" Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.291530 4829 generic.go:334] "Generic (PLEG): container finished" podID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" exitCode=143 Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.293061 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} Feb 17 16:20:53 crc kubenswrapper[4829]: 
E0217 16:20:53.519572 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:53 crc kubenswrapper[4829]: I0217 16:20:53.688975 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.304764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerStarted","Data":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.305006 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:20:54 crc kubenswrapper[4829]: I0217 16:20:54.340640 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.880608074 podStartE2EDuration="6.340611236s" podCreationTimestamp="2026-02-17 16:20:48 +0000 UTC" firstStartedPulling="2026-02-17 16:20:49.705673199 +0000 UTC m=+1562.122691177" lastFinishedPulling="2026-02-17 16:20:53.165676351 +0000 UTC m=+1565.582694339" observedRunningTime="2026-02-17 16:20:54.327786631 +0000 UTC m=+1566.744804609" watchObservedRunningTime="2026-02-17 16:20:54.340611236 +0000 UTC m=+1566.757629234" Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315068 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" containerID="cri-o://432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 
16:20:55.315098 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" containerID="cri-o://71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315114 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" containerID="cri-o://954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" gracePeriod=30 Feb 17 16:20:55 crc kubenswrapper[4829]: I0217 16:20:55.315158 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" containerID="cri-o://3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" gracePeriod=30 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.061844 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226457 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226508 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.226631 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") pod \"29ec0e6f-a70b-414f-880d-59dec9878ff0\" (UID: \"29ec0e6f-a70b-414f-880d-59dec9878ff0\") " Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.227476 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs" (OuterVolumeSpecName: "logs") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.234915 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z" (OuterVolumeSpecName: "kube-api-access-42x7z") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "kube-api-access-42x7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.291251 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data" (OuterVolumeSpecName: "config-data") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.307739 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ec0e6f-a70b-414f-880d-59dec9878ff0" (UID: "29ec0e6f-a70b-414f-880d-59dec9878ff0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329510 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329545 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42x7z\" (UniqueName: \"kubernetes.io/projected/29ec0e6f-a70b-414f-880d-59dec9878ff0-kube-api-access-42x7z\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329556 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ec0e6f-a70b-414f-880d-59dec9878ff0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.329564 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ec0e6f-a70b-414f-880d-59dec9878ff0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337377 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337404 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" exitCode=2 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337412 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337460 4829 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337486 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.337498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346668 4829 generic.go:334] "Generic (PLEG): container finished" podID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" exitCode=0 Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346723 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346757 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29ec0e6f-a70b-414f-880d-59dec9878ff0","Type":"ContainerDied","Data":"622a936aec57e0c945ae7671635046510015465545d885452898518495289721"} Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346777 4829 scope.go:117] "RemoveContainer" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.346782 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.375671 4829 scope.go:117] "RemoveContainer" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.399469 4829 scope.go:117] "RemoveContainer" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.399680 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.400018 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": container with ID starting with 3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2 not found: ID does not exist" containerID="3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400057 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2"} err="failed to get container status \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": rpc error: code = NotFound desc = could not find container \"3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2\": container with ID starting with 3fd7b2c1806b018948f7d2e2a5eda577c3babf1c2737c1e01a085255c7e58cc2 not found: ID does not exist" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400080 4829 scope.go:117] "RemoveContainer" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.400399 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": container with ID starting with a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5 not found: ID does not exist" containerID="a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.400421 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5"} err="failed to get container status \"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": rpc error: code = NotFound desc = could not find container \"a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5\": container with ID starting with a1aa8942a6b800aed28ee018b3fe3760d59f5016f18778978286a5889c4b0dc5 not found: ID does not exist" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.420670 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438282 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.438881 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438898 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: E0217 16:20:56.438918 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.438925 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 
16:20:56.439158 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-api" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.439194 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" containerName="nova-api-log" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.440524 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444413 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444644 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.444786 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.453504 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635374 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635409 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.635780 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.636809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738683 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738739 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") 
pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738829 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738867 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.738925 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.739233 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743137 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743178 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.743599 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.757469 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.757978 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"nova-api-0\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.801449 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.811308 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:56 crc kubenswrapper[4829]: I0217 16:20:56.847950 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.319452 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:20:57 crc kubenswrapper[4829]: W0217 16:20:57.326816 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae839887_6e18_4062_bf65_95cef31fdd49.slice/crio-6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6 WatchSource:0}: Error finding container 6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6: Status 404 returned error can't find the container with id 6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6 Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.410599 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6"} Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.429876 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:20:57 crc kubenswrapper[4829]: E0217 16:20:57.542506 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:57 
crc kubenswrapper[4829]: I0217 16:20:57.615341 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.617400 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.621382 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.622166 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.648469 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671336 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671409 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: 
\"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.671529 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.773555 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.773967 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.774095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.774172 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: 
\"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.783503 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.783533 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.792276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.792501 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"nova-cell1-cell-mapping-8dvtl\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:57 crc kubenswrapper[4829]: I0217 16:20:57.915952 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.190913 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.302914 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ec0e6f-a70b-414f-880d-59dec9878ff0" path="/var/lib/kubelet/pods/29ec0e6f-a70b-414f-880d-59dec9878ff0/volumes" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.390839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391265 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391335 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391375 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391403 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" 
(UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391493 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.391549 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6kxw\" (UniqueName: \"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") pod \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\" (UID: \"8527b72c-dacf-4126-9b7b-06a0294d6ac0\") " Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.394060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.395016 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.403069 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw" (OuterVolumeSpecName: "kube-api-access-v6kxw") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). 
InnerVolumeSpecName "kube-api-access-v6kxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.411072 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts" (OuterVolumeSpecName: "scripts") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.439774 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445841 4829 generic.go:334] "Generic (PLEG): container finished" podID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" exitCode=0 Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445895 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445922 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8527b72c-dacf-4126-9b7b-06a0294d6ac0","Type":"ContainerDied","Data":"4f38d7e9c21e5a5bb4aa4283aef17c56de184252a9a841ed16ca27e145f9895d"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.445938 4829 scope.go:117] "RemoveContainer" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 
16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.446071 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.464307 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.464844 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerStarted","Data":"20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa"} Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.493378 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.493359418 podStartE2EDuration="2.493359418s" podCreationTimestamp="2026-02-17 16:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:58.490759168 +0000 UTC m=+1570.907777146" watchObservedRunningTime="2026-02-17 16:20:58.493359418 +0000 UTC m=+1570.910377396" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495085 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495115 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8527b72c-dacf-4126-9b7b-06a0294d6ac0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495124 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6kxw\" (UniqueName: 
\"kubernetes.io/projected/8527b72c-dacf-4126-9b7b-06a0294d6ac0-kube-api-access-v6kxw\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495136 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.495144 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.540690 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.581743 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data" (OuterVolumeSpecName: "config-data") pod "8527b72c-dacf-4126-9b7b-06a0294d6ac0" (UID: "8527b72c-dacf-4126-9b7b-06a0294d6ac0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.599313 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.599348 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8527b72c-dacf-4126-9b7b-06a0294d6ac0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.698555 4829 scope.go:117] "RemoveContainer" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.731024 4829 scope.go:117] "RemoveContainer" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.755711 4829 scope.go:117] "RemoveContainer" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808265 4829 scope.go:117] "RemoveContainer" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.808652 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": container with ID starting with 3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7 not found: ID does not exist" containerID="3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808697 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7"} 
err="failed to get container status \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": rpc error: code = NotFound desc = could not find container \"3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7\": container with ID starting with 3391e6dc5b7bf3106caf4eb5656ff0ce5c96e91ead53d9cf78dce05ad18e8fe7 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808722 4829 scope.go:117] "RemoveContainer" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.808947 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": container with ID starting with 71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1 not found: ID does not exist" containerID="71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808982 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1"} err="failed to get container status \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": rpc error: code = NotFound desc = could not find container \"71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1\": container with ID starting with 71b54280dc2e41d0bec5a21c62618b15e5e6d3343c54bc202771570a33848fd1 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.808998 4829 scope.go:117] "RemoveContainer" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.809244 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": container with ID starting with 954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d not found: ID does not exist" containerID="954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809289 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d"} err="failed to get container status \"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": rpc error: code = NotFound desc = could not find container \"954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d\": container with ID starting with 954d101f7448b9fb669217b776ad6cfb791051de0b4df92e2c7ab1525016533d not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809317 4829 scope.go:117] "RemoveContainer" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.809596 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": container with ID starting with 432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472 not found: ID does not exist" containerID="432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.809622 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472"} err="failed to get container status \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": rpc error: code = NotFound desc = could not find container \"432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472\": container with ID 
starting with 432785d8d494fd952fef3da6c9ca8e9523bec85cdbcb81f2b6482f286dea1472 not found: ID does not exist" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.817630 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.832701 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.852803 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853527 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853545 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853571 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853590 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853603 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853609 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: E0217 16:20:58.853635 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: 
I0217 16:20:58.853641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853863 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-notification-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853885 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="proxy-httpd" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853907 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="ceilometer-central-agent" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.853918 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" containerName="sg-core" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.856020 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.859773 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.859930 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.867103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:20:58 crc kubenswrapper[4829]: I0217 16:20:58.881242 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.008888 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.008965 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009110 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009444 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009524 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.009809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" 
Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.112792 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113108 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113618 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113651 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.113946 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.118010 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.118278 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.121164 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.121342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.132931 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"ceilometer-0\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.233165 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.490107 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerStarted","Data":"162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4"} Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.490799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerStarted","Data":"512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7"} Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.509749 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8dvtl" podStartSLOduration=2.509732204 podStartE2EDuration="2.509732204s" podCreationTimestamp="2026-02-17 16:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:59.507520314 +0000 UTC m=+1571.924538292" watchObservedRunningTime="2026-02-17 16:20:59.509732204 +0000 UTC m=+1571.926750182" Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.776110 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.785744 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:20:59 crc 
kubenswrapper[4829]: I0217 16:20:59.881662 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:20:59 crc kubenswrapper[4829]: I0217 16:20:59.882149 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" containerID="cri-o://09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" gracePeriod=10 Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.314228 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8527b72c-dacf-4126-9b7b-06a0294d6ac0" path="/var/lib/kubelet/pods/8527b72c-dacf-4126-9b7b-06a0294d6ac0/volumes" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.513860 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.513902 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"917e80d190c9f417c6d7ad24e1ab772a0f50f28f3fab4aadaa2a3c83b5714c95"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515616 4829 generic.go:334] "Generic (PLEG): container finished" podID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerID="09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" exitCode=0 Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515848 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515890 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" event={"ID":"52a2d626-5ff1-4f8c-80d1-3b90906b5a96","Type":"ContainerDied","Data":"28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185"} Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.515904 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28fb0e8376fe1b1dc8bc84fb866a4e66e94514394b94bf9702290a52cfbf3185" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.584929 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779629 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779670 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779816 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") pod 
\"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.779880 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.780017 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") pod \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\" (UID: \"52a2d626-5ff1-4f8c-80d1-3b90906b5a96\") " Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.792302 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl" (OuterVolumeSpecName: "kube-api-access-dmtxl") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "kube-api-access-dmtxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.850949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.851884 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config" (OuterVolumeSpecName: "config") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.858551 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883803 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883832 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883843 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtxl\" (UniqueName: \"kubernetes.io/projected/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-kube-api-access-dmtxl\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.883853 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 
17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.889013 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.912202 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52a2d626-5ff1-4f8c-80d1-3b90906b5a96" (UID: "52a2d626-5ff1-4f8c-80d1-3b90906b5a96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.985370 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:00 crc kubenswrapper[4829]: I0217 16:21:00.985409 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52a2d626-5ff1-4f8c-80d1-3b90906b5a96-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.528011 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-g5wqn" Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.528186 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"} Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.587829 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:21:01 crc kubenswrapper[4829]: I0217 16:21:01.598212 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-g5wqn"] Feb 17 16:21:02 crc kubenswrapper[4829]: I0217 16:21:02.291240 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" path="/var/lib/kubelet/pods/52a2d626-5ff1-4f8c-80d1-3b90906b5a96/volumes" Feb 17 16:21:02 crc kubenswrapper[4829]: I0217 16:21:02.551101 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"} Feb 17 16:21:03 crc kubenswrapper[4829]: I0217 16:21:03.280489 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:03 crc kubenswrapper[4829]: E0217 16:21:03.280823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.579831 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerStarted","Data":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"} Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.580442 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.581973 4829 generic.go:334] "Generic (PLEG): container finished" podID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerID="162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4" exitCode=0 Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.582014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerDied","Data":"162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4"} Feb 17 16:21:04 crc kubenswrapper[4829]: I0217 16:21:04.610832 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4473001180000002 podStartE2EDuration="6.610811071s" podCreationTimestamp="2026-02-17 16:20:58 +0000 UTC" firstStartedPulling="2026-02-17 16:20:59.786039654 +0000 UTC m=+1572.203057632" lastFinishedPulling="2026-02-17 16:21:03.949550607 +0000 UTC m=+1576.366568585" observedRunningTime="2026-02-17 16:21:04.607097001 +0000 UTC m=+1577.024115019" watchObservedRunningTime="2026-02-17 16:21:04.610811071 +0000 UTC m=+1577.027829049" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.105213 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130611 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130689 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.130883 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.131074 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") pod \"85602fcf-2cee-4c92-8270-623eb79c4baa\" (UID: \"85602fcf-2cee-4c92-8270-623eb79c4baa\") " Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.139002 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts" (OuterVolumeSpecName: "scripts") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.159024 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv" (OuterVolumeSpecName: "kube-api-access-w4qcv") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "kube-api-access-w4qcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.201734 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.214172 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data" (OuterVolumeSpecName: "config-data") pod "85602fcf-2cee-4c92-8270-623eb79c4baa" (UID: "85602fcf-2cee-4c92-8270-623eb79c4baa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235734 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235767 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235776 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85602fcf-2cee-4c92-8270-623eb79c4baa-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.235785 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4qcv\" (UniqueName: \"kubernetes.io/projected/85602fcf-2cee-4c92-8270-623eb79c4baa-kube-api-access-w4qcv\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609049 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8dvtl" event={"ID":"85602fcf-2cee-4c92-8270-623eb79c4baa","Type":"ContainerDied","Data":"512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7"} Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609096 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512cf5344f542c1ccd5962b24db4b75d642cd086ff1e4cff570c8fa1d645e5e7" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.609175 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8dvtl" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.801896 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.803994 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.849782 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.891892 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.892226 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" containerID="cri-o://953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80" gracePeriod=30 Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.892295 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" containerID="cri-o://027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d" gracePeriod=30 Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.917239 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:06 crc kubenswrapper[4829]: I0217 16:21:06.917508 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" containerID="cri-o://2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" gracePeriod=30 Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.625189 4829 
generic.go:334] "Generic (PLEG): container finished" podID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerID="953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80" exitCode=143 Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.626222 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80"} Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.813786 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:07 crc kubenswrapper[4829]: I0217 16:21:07.814277 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:07 crc kubenswrapper[4829]: E0217 16:21:07.987178 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.223195 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" 
containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.223928 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.224321 4829 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.224388 4829 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.251853 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.276059 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.381852 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.382033 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.382506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") pod \"0b803a04-fbc0-4844-aa4f-b8302c15024f\" (UID: \"0b803a04-fbc0-4844-aa4f-b8302c15024f\") " Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.389946 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb" (OuterVolumeSpecName: "kube-api-access-pwrqb") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "kube-api-access-pwrqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.418037 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data" (OuterVolumeSpecName: "config-data") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.423503 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b803a04-fbc0-4844-aa4f-b8302c15024f" (UID: "0b803a04-fbc0-4844-aa4f-b8302c15024f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485439 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrqb\" (UniqueName: \"kubernetes.io/projected/0b803a04-fbc0-4844-aa4f-b8302c15024f-kube-api-access-pwrqb\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485587 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.485998 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b803a04-fbc0-4844-aa4f-b8302c15024f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644327 4829 generic.go:334] "Generic (PLEG): container finished" podID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" 
exitCode=0 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644388 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerDied","Data":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644421 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b803a04-fbc0-4844-aa4f-b8302c15024f","Type":"ContainerDied","Data":"c52c06fac7bbd9c26185cdf4701a182bdfd4bd0e4897e4f1d991aa5849c43671"} Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644479 4829 scope.go:117] "RemoveContainer" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644809 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" containerID="cri-o://20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa" gracePeriod=30 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.644907 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" containerID="cri-o://717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794" gracePeriod=30 Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.689724 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.713492 4829 scope.go:117] "RemoveContainer" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 
crc kubenswrapper[4829]: E0217 16:21:08.713915 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": container with ID starting with 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 not found: ID does not exist" containerID="2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.713983 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94"} err="failed to get container status \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": rpc error: code = NotFound desc = could not find container \"2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94\": container with ID starting with 2a86b0d078b3ee74aa0c78d89b7acbcb370ee456439cc04a5629814056472a94 not found: ID does not exist" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.741227 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.760377 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764456 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764484 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764555 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 
16:21:08.764564 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764597 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764605 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: E0217 16:21:08.764618 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="init" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.764625 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="init" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765237 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a2d626-5ff1-4f8c-80d1-3b90906b5a96" containerName="dnsmasq-dns" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765262 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" containerName="nova-scheduler-scheduler" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.765310 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" containerName="nova-manage" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.766268 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.770142 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.779103 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.894953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.895257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.895481 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997125 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997553 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:08 crc kubenswrapper[4829]: I0217 16:21:08.997813 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.003074 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-config-data\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.004313 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d63bbb-2d26-4b85-8241-2785a5194a21-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.021192 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pc4\" (UniqueName: \"kubernetes.io/projected/37d63bbb-2d26-4b85-8241-2785a5194a21-kube-api-access-f4pc4\") pod \"nova-scheduler-0\" (UID: \"37d63bbb-2d26-4b85-8241-2785a5194a21\") " pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.133789 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.666396 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.668826 4829 generic.go:334] "Generic (PLEG): container finished" podID="ae839887-6e18-4062-bf65-95cef31fdd49" containerID="20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa" exitCode=143 Feb 17 16:21:09 crc kubenswrapper[4829]: I0217 16:21:09.668904 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa"} Feb 17 16:21:09 crc kubenswrapper[4829]: W0217 16:21:09.677544 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d63bbb_2d26_4b85_8241_2785a5194a21.slice/crio-b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f WatchSource:0}: Error finding container b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f: Status 404 returned error can't find the container with id b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.076851 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 10.217.0.2:52662->10.217.0.245:8775: read: connection reset by peer" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.076920 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 
10.217.0.2:52648->10.217.0.245:8775: read: connection reset by peer" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.308597 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b803a04-fbc0-4844-aa4f-b8302c15024f" path="/var/lib/kubelet/pods/0b803a04-fbc0-4844-aa4f-b8302c15024f/volumes" Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.684458 4829 generic.go:334] "Generic (PLEG): container finished" podID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerID="027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d" exitCode=0 Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.684540 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.686301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37d63bbb-2d26-4b85-8241-2785a5194a21","Type":"ContainerStarted","Data":"8b9f6eae650b9b2b5280896b488f52a730430d9a560030e5a10b92062d67d42d"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.686344 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"37d63bbb-2d26-4b85-8241-2785a5194a21","Type":"ContainerStarted","Data":"b70f3d2f6cd57ddb3bc45c7850ed0f901be135c19478af1c11ea4bd4d035045f"} Feb 17 16:21:10 crc kubenswrapper[4829]: I0217 16:21:10.712684 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.712659774 podStartE2EDuration="2.712659774s" podCreationTimestamp="2026-02-17 16:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:10.707446663 +0000 UTC m=+1583.124464671" watchObservedRunningTime="2026-02-17 
16:21:10.712659774 +0000 UTC m=+1583.129677762" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.380854 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565861 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565937 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.565981 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.566012 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") pod \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\" (UID: \"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea\") " 
Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.567012 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs" (OuterVolumeSpecName: "logs") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.573781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb" (OuterVolumeSpecName: "kube-api-access-rljnb") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "kube-api-access-rljnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.601458 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data" (OuterVolumeSpecName: "config-data") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.606608 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.647514 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" (UID: "7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670103 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670150 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670166 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670179 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rljnb\" (UniqueName: \"kubernetes.io/projected/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-kube-api-access-rljnb\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.670194 4829 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.713916 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea","Type":"ContainerDied","Data":"7f678395f28b403dc65226210aa2f82c7e9fac520b66b5fae571b8af46a56688"} Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.713934 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.714262 4829 scope.go:117] "RemoveContainer" containerID="027670def26cee7dd01a660df9a39f7d4641af388ebf0406dc407101371e3b7d" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.763022 4829 scope.go:117] "RemoveContainer" containerID="953327f061f83eb4843cc581ea42d2c3534f3411211169dd2a78dadb12589e80" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.769319 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.788968 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.802380 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:11 crc kubenswrapper[4829]: E0217 16:21:11.802930 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.802948 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" Feb 17 16:21:11 crc kubenswrapper[4829]: E0217 16:21:11.802995 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.803003 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" Feb 17 16:21:11 crc 
kubenswrapper[4829]: I0217 16:21:11.803226 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-metadata" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.803256 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" containerName="nova-metadata-log" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.804474 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.808014 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.808192 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.837675 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890237 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890326 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890500 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.890651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993179 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993267 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993512 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.993559 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.994293 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0afa824-7a82-41cc-9274-28689e2f3f57-logs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.999172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:11 crc kubenswrapper[4829]: I0217 16:21:11.999322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.000307 4829 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0afa824-7a82-41cc-9274-28689e2f3f57-config-data\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.008500 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4k5\" (UniqueName: \"kubernetes.io/projected/e0afa824-7a82-41cc-9274-28689e2f3f57-kube-api-access-4t4k5\") pod \"nova-metadata-0\" (UID: \"e0afa824-7a82-41cc-9274-28689e2f3f57\") " pod="openstack/nova-metadata-0" Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.125479 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.322964 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea" path="/var/lib/kubelet/pods/7fc7aee2-2e1b-43f4-bf45-2234c3f8e0ea/volumes" Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.614766 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:21:12 crc kubenswrapper[4829]: W0217 16:21:12.617146 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0afa824_7a82_41cc_9274_28689e2f3f57.slice/crio-91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6 WatchSource:0}: Error finding container 91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6: Status 404 returned error can't find the container with id 91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6 Feb 17 16:21:12 crc kubenswrapper[4829]: I0217 16:21:12.724602 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"91646f1c12a228443e7550b15d13a72c5c981ebef4949d4d1f71f77767ffdae6"} Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.738696 4829 generic.go:334] "Generic (PLEG): container finished" podID="ae839887-6e18-4062-bf65-95cef31fdd49" containerID="717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794" exitCode=0 Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.738786 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794"} Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.739317 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae839887-6e18-4062-bf65-95cef31fdd49","Type":"ContainerDied","Data":"6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6"} Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.739333 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6195d7428199f8ccc33d6e9dd4a102a4c37a86e7780103db19c3d3af282a96b6" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.741938 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"830d5ac2e08e914204172ecc65baba07c733cd6fbd5a56e924f7eb7be6317787"} Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.741988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e0afa824-7a82-41cc-9274-28689e2f3f57","Type":"ContainerStarted","Data":"1d551bd5742f917f6e1b515eb133fdfc160b96b6b92de9274b9d3485cd2697f0"} Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.769864 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.769842388 podStartE2EDuration="2.769842388s" podCreationTimestamp="2026-02-17 16:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:13.761831803 +0000 UTC m=+1586.178849781" watchObservedRunningTime="2026-02-17 16:21:13.769842388 +0000 UTC m=+1586.186860366" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.800240 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868747 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868803 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.868906 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869056 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: 
I0217 16:21:13.869125 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869173 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs" (OuterVolumeSpecName: "logs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.869736 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") pod \"ae839887-6e18-4062-bf65-95cef31fdd49\" (UID: \"ae839887-6e18-4062-bf65-95cef31fdd49\") " Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.870446 4829 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae839887-6e18-4062-bf65-95cef31fdd49-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.874587 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq" (OuterVolumeSpecName: "kube-api-access-zd5nq") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "kube-api-access-zd5nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.910019 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data" (OuterVolumeSpecName: "config-data") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.918359 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.942750 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.962287 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae839887-6e18-4062-bf65-95cef31fdd49" (UID: "ae839887-6e18-4062-bf65-95cef31fdd49"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973148 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973188 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973201 4829 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973215 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd5nq\" (UniqueName: \"kubernetes.io/projected/ae839887-6e18-4062-bf65-95cef31fdd49-kube-api-access-zd5nq\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:13 crc kubenswrapper[4829]: I0217 16:21:13.973228 4829 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae839887-6e18-4062-bf65-95cef31fdd49-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.134499 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.755110 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.784420 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.805023 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.821560 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:14 crc kubenswrapper[4829]: E0217 16:21:14.822329 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822360 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" Feb 17 16:21:14 crc kubenswrapper[4829]: E0217 16:21:14.822418 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822433 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822888 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-api" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.822915 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" containerName="nova-api-log" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.825002 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845007 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845032 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.845463 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.853806 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897599 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897675 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897739 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:14 crc kubenswrapper[4829]: I0217 16:21:14.897839 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007367 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007887 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.007981 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc 
kubenswrapper[4829]: I0217 16:21:15.008006 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.008090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.008121 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.009977 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d7182c-e529-468f-8022-9fd5fc66b554-logs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.023891 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.028138 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.028257 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-config-data\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.042094 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62d7182c-e529-468f-8022-9fd5fc66b554-internal-tls-certs\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.046264 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8q6j\" (UniqueName: \"kubernetes.io/projected/62d7182c-e529-468f-8022-9fd5fc66b554-kube-api-access-c8q6j\") pod \"nova-api-0\" (UID: \"62d7182c-e529-468f-8022-9fd5fc66b554\") " pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.183149 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:21:15 crc kubenswrapper[4829]: W0217 16:21:15.686016 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62d7182c_e529_468f_8022_9fd5fc66b554.slice/crio-4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570 WatchSource:0}: Error finding container 4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570: Status 404 returned error can't find the container with id 4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570 Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.692280 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:21:15 crc kubenswrapper[4829]: I0217 16:21:15.766899 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"4f7e604bd2915b6eee62573f9c570f82e389b0c7eb4cd774b7d007444842e570"} Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.281516 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:16 crc kubenswrapper[4829]: E0217 16:21:16.282144 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.308972 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae839887-6e18-4062-bf65-95cef31fdd49" path="/var/lib/kubelet/pods/ae839887-6e18-4062-bf65-95cef31fdd49/volumes" Feb 17 16:21:16 crc kubenswrapper[4829]: 
I0217 16:21:16.783702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"a8d97ed8c6afd6807abc872f429f98f5cb7e62719b360704b2aaa301cc509d46"} Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.783761 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62d7182c-e529-468f-8022-9fd5fc66b554","Type":"ContainerStarted","Data":"ba380e909e775b3fbd3bc14cdd75dda2ae285393e17cad1bd3158821c5f992d0"} Feb 17 16:21:16 crc kubenswrapper[4829]: I0217 16:21:16.826666 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.826647073 podStartE2EDuration="2.826647073s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:16.814364512 +0000 UTC m=+1589.231382500" watchObservedRunningTime="2026-02-17 16:21:16.826647073 +0000 UTC m=+1589.243665051" Feb 17 16:21:17 crc kubenswrapper[4829]: I0217 16:21:17.126369 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:21:17 crc kubenswrapper[4829]: I0217 16:21:17.126699 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:21:18 crc kubenswrapper[4829]: E0217 16:21:18.316930 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.759596 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807177 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807286 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807383 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.807521 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") pod \"0aced48a-e424-4579-a0f3-681531606707\" (UID: \"0aced48a-e424-4579-a0f3-681531606707\") " Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.831833 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts" (OuterVolumeSpecName: "scripts") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837734 4829 generic.go:334] "Generic (PLEG): container finished" podID="0aced48a-e424-4579-a0f3-681531606707" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" exitCode=137 Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837782 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"} Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837813 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0aced48a-e424-4579-a0f3-681531606707","Type":"ContainerDied","Data":"c4afff1a2ba6d2a5ca1bb51c6475f556a5d2736c3b4ec308f87e7a0a06dccc60"} Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837833 4829 scope.go:117] "RemoveContainer" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.837911 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.862730 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg" (OuterVolumeSpecName: "kube-api-access-hj6sg") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "kube-api-access-hj6sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.912425 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6sg\" (UniqueName: \"kubernetes.io/projected/0aced48a-e424-4579-a0f3-681531606707-kube-api-access-hj6sg\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.912466 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:18 crc kubenswrapper[4829]: I0217 16:21:18.985726 4829 scope.go:117] "RemoveContainer" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.046421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.108747 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data" (OuterVolumeSpecName: "config-data") pod "0aced48a-e424-4579-a0f3-681531606707" (UID: "0aced48a-e424-4579-a0f3-681531606707"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.109619 4829 scope.go:117] "RemoveContainer" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.127810 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.127845 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aced48a-e424-4579-a0f3-681531606707-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.135131 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.154652 4829 scope.go:117] "RemoveContainer" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.225641 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.237240 4829 scope.go:117] "RemoveContainer" containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241536 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.241661 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": container with ID starting with 0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6 not found: ID does not exist" 
containerID="0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241724 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6"} err="failed to get container status \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": rpc error: code = NotFound desc = could not find container \"0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6\": container with ID starting with 0b1291d3c6eb3838c856cde46191262ad70993ad86538d52fa69c75a6ecfe8c6 not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.241751 4829 scope.go:117] "RemoveContainer" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.252352 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": container with ID starting with eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af not found: ID does not exist" containerID="eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252566 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af"} err="failed to get container status \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": rpc error: code = NotFound desc = could not find container \"eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af\": container with ID starting with eac6a2c6050b35f776d580ecfa733661b857e64ed27deb3135e37d55f5eb94af not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252674 4829 scope.go:117] 
"RemoveContainer" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.252626 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.263332 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": container with ID starting with 25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64 not found: ID does not exist" containerID="25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.263538 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64"} err="failed to get container status \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": rpc error: code = NotFound desc = could not find container \"25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64\": container with ID starting with 25b47fdfb528c0bb1e00030296b1df5f6ba3d4882399751574546eb600fc1a64 not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.263654 4829 scope.go:117] "RemoveContainer" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.271831 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": container with ID starting with 41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a not found: ID does not exist" containerID="41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 
16:21:19.271873 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a"} err="failed to get container status \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": rpc error: code = NotFound desc = could not find container \"41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a\": container with ID starting with 41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a not found: ID does not exist" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.281821 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282285 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282297 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282307 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282313 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282331 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: E0217 16:21:19.282345 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282350 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282554 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-api" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282589 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-listener" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282602 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-notifier" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.282614 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aced48a-e424-4579-a0f3-681531606707" containerName="aodh-evaluator" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.285156 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296261 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296459 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296567 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296593 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.296761 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-j6ldx" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.330108 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332585 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.332967 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.333054 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435236 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: 
I0217 16:21:19.435277 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435324 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.435354 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.445755 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-public-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.447530 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-internal-tls-certs\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" 
Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.457980 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jpdm\" (UniqueName: \"kubernetes.io/projected/58d7c5e4-0195-41e6-afd9-9f31d6472d61-kube-api-access-9jpdm\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.458056 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.460740 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-scripts\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.473840 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d7c5e4-0195-41e6-afd9-9f31d6472d61-config-data\") pod \"aodh-0\" (UID: \"58d7c5e4-0195-41e6-afd9-9f31d6472d61\") " pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.645165 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 16:21:19 crc kubenswrapper[4829]: I0217 16:21:19.886897 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:21:20 crc kubenswrapper[4829]: W0217 16:21:20.145863 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d7c5e4_0195_41e6_afd9_9f31d6472d61.slice/crio-a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891 WatchSource:0}: Error finding container a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891: Status 404 returned error can't find the container with id a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891 Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.159550 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.294595 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aced48a-e424-4579-a0f3-681531606707" path="/var/lib/kubelet/pods/0aced48a-e424-4579-a0f3-681531606707/volumes" Feb 17 16:21:20 crc kubenswrapper[4829]: I0217 16:21:20.863405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"a43ab59b23f9348213aeacb6dea72635a9884b71e84a03a60ddffd60d25b1891"} Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.126780 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.127353 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:21:22 crc kubenswrapper[4829]: I0217 16:21:22.891397 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"70b2d242e7e123ca0465bb9778178ee3ee64a382e5d26bb2eaf1c75482b55605"} Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.142924 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0afa824-7a82-41cc-9274-28689e2f3f57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.142997 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e0afa824-7a82-41cc-9274-28689e2f3f57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:23 crc kubenswrapper[4829]: E0217 16:21:23.491504 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:23 crc kubenswrapper[4829]: I0217 16:21:23.905108 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"4ea447f5414056a4f47899ccee039a39288b7ce44013f7f5a59b1248929852e3"} Feb 17 16:21:24 crc kubenswrapper[4829]: I0217 16:21:24.941674 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"377114032ef56a4ca0f06c429fb23a5271744cdc92228b8cdbcfc86338e02444"} Feb 17 16:21:24 crc kubenswrapper[4829]: I0217 16:21:24.942239 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58d7c5e4-0195-41e6-afd9-9f31d6472d61","Type":"ContainerStarted","Data":"0bac6265fd29394b09a25f49ceca7d9bf6cc526664a5709395333282e748b99f"} Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.022327 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.808055622 podStartE2EDuration="6.022295281s" podCreationTimestamp="2026-02-17 16:21:19 +0000 UTC" firstStartedPulling="2026-02-17 16:21:20.149693846 +0000 UTC m=+1592.566711844" lastFinishedPulling="2026-02-17 16:21:24.363933535 +0000 UTC m=+1596.780951503" observedRunningTime="2026-02-17 16:21:25.013801793 +0000 UTC m=+1597.430819781" watchObservedRunningTime="2026-02-17 16:21:25.022295281 +0000 UTC m=+1597.439313269" Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.183667 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:25 crc kubenswrapper[4829]: I0217 16:21:25.183718 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:21:26 crc kubenswrapper[4829]: I0217 16:21:26.194728 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62d7182c-e529-468f-8022-9fd5fc66b554" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:26 crc kubenswrapper[4829]: I0217 16:21:26.194737 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62d7182c-e529-468f-8022-9fd5fc66b554" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:21:27 crc kubenswrapper[4829]: I0217 16:21:27.279945 4829 scope.go:117] "RemoveContainer" 
containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:27 crc kubenswrapper[4829]: E0217 16:21:27.280380 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:28 crc kubenswrapper[4829]: E0217 16:21:28.653406 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:29 crc kubenswrapper[4829]: I0217 16:21:29.254142 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.140668 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.142890 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.149197 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:21:32 crc kubenswrapper[4829]: I0217 16:21:32.159235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.032383 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 
16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.032867 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics" containerID="cri-o://1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" gracePeriod=30 Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.206101 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.206348 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter" containerID="cri-o://310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" gracePeriod=30 Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.675648 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.839192 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") pod \"2003bd16-d251-4004-9eca-9e47fb54e514\" (UID: \"2003bd16-d251-4004-9eca-9e47fb54e514\") " Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.849323 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk" (OuterVolumeSpecName: "kube-api-access-n4pdk") pod "2003bd16-d251-4004-9eca-9e47fb54e514" (UID: "2003bd16-d251-4004-9eca-9e47fb54e514"). InnerVolumeSpecName "kube-api-access-n4pdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.934140 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:21:34 crc kubenswrapper[4829]: I0217 16:21:34.944643 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4pdk\" (UniqueName: \"kubernetes.io/projected/2003bd16-d251-4004-9eca-9e47fb54e514-kube-api-access-n4pdk\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046399 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046806 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.046833 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") pod \"b4cfa907-6caa-41a9-b86a-371fd960e471\" (UID: \"b4cfa907-6caa-41a9-b86a-371fd960e471\") " Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.055033 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8" (OuterVolumeSpecName: "kube-api-access-w6tr8") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "kube-api-access-w6tr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076010 4829 generic.go:334] "Generic (PLEG): container finished" podID="2003bd16-d251-4004-9eca-9e47fb54e514" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" exitCode=2 Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076083 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerDied","Data":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"} Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076114 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2003bd16-d251-4004-9eca-9e47fb54e514","Type":"ContainerDied","Data":"f3acf26671b1c6832da4bfa6831eef246a277a881f398330cbffb2987336361d"} Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076133 4829 scope.go:117] "RemoveContainer" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.076272 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.080678 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082880 4829 generic.go:334] "Generic (PLEG): container finished" podID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" exitCode=2 Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082923 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082928 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerDied","Data":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"} Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.082962 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b4cfa907-6caa-41a9-b86a-371fd960e471","Type":"ContainerDied","Data":"16d0efc5b15b7553e7e19ac3d437aa06659539c98061b36c28ebb604339b5b7c"} Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.122199 4829 scope.go:117] "RemoveContainer" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.124480 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": container with ID starting with 1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4 not found: ID does not exist" containerID="1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.124541 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4"} err="failed to get container 
status \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": rpc error: code = NotFound desc = could not find container \"1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4\": container with ID starting with 1257ee6929cde46c3aa9ad19fb6990e919a6ec396bfca1cda8eb14189691b2b4 not found: ID does not exist" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.124592 4829 scope.go:117] "RemoveContainer" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.126119 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.146453 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data" (OuterVolumeSpecName: "config-data") pod "b4cfa907-6caa-41a9-b86a-371fd960e471" (UID: "b4cfa907-6caa-41a9-b86a-371fd960e471"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149689 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149717 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cfa907-6caa-41a9-b86a-371fd960e471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.149727 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6tr8\" (UniqueName: \"kubernetes.io/projected/b4cfa907-6caa-41a9-b86a-371fd960e471-kube-api-access-w6tr8\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.162503 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.198497 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.200055 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.201655 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203011 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.203606 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203620 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter" Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.203642 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.203648 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.204235 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" containerName="kube-state-metrics" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.204280 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" containerName="mysqld-exporter" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.205599 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.208021 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.209692 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.212856 4829 scope.go:117] "RemoveContainer" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" Feb 17 16:21:35 crc kubenswrapper[4829]: E0217 16:21:35.215055 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": container with ID starting with 310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088 not found: ID does not exist" containerID="310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.215094 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088"} err="failed to get container status \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": rpc error: code = NotFound desc = could not find container \"310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088\": container with ID starting with 310c74e282fc3a9da0e2e36b81f215288c790f1925126ccfdb08d29e19c5a088 not found: ID does not exist" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.224373 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.228321 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:35 crc 
kubenswrapper[4829]: I0217 16:21:35.361486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.362327 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.362521 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99lv\" (UniqueName: \"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.363007 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.417916 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.433327 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.447125 4829 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.449112 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.452284 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.452490 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.464876 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465032 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465135 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99lv\" (UniqueName: 
\"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.465984 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.472730 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.474020 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.484876 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57285ef-f362-4fb7-8f6c-633698507b3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.487862 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99lv\" (UniqueName: \"kubernetes.io/projected/f57285ef-f362-4fb7-8f6c-633698507b3d-kube-api-access-c99lv\") pod \"kube-state-metrics-0\" (UID: \"f57285ef-f362-4fb7-8f6c-633698507b3d\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.567782 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568399 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568552 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.568671 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.590268 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671007 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671095 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.671312 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.675097 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.675627 4829 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.676022 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-config-data\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.697324 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8l9\" (UniqueName: \"kubernetes.io/projected/e39a0dce-4da5-4ff4-9e50-e2dc41d22092-kube-api-access-mk8l9\") pod \"mysqld-exporter-0\" (UID: \"e39a0dce-4da5-4ff4-9e50-e2dc41d22092\") " pod="openstack/mysqld-exporter-0" Feb 17 16:21:35 crc kubenswrapper[4829]: I0217 16:21:35.889635 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.096015 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:36 crc kubenswrapper[4829]: W0217 16:21:36.105024 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57285ef_f362_4fb7_8f6c_633698507b3d.slice/crio-ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484 WatchSource:0}: Error finding container ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484: Status 404 returned error can't find the container with id ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484 Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.112991 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.120341 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.295039 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2003bd16-d251-4004-9eca-9e47fb54e514" path="/var/lib/kubelet/pods/2003bd16-d251-4004-9eca-9e47fb54e514/volumes" Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.296032 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cfa907-6caa-41a9-b86a-371fd960e471" path="/var/lib/kubelet/pods/b4cfa907-6caa-41a9-b86a-371fd960e471/volumes" Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362311 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362596 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" 
containerID="cri-o://a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" gracePeriod=30 Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362656 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" containerID="cri-o://24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" gracePeriod=30 Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362703 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" containerID="cri-o://c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" gracePeriod=30 Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.362664 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" containerID="cri-o://26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" gracePeriod=30 Feb 17 16:21:36 crc kubenswrapper[4829]: I0217 16:21:36.406758 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.130266 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e39a0dce-4da5-4ff4-9e50-e2dc41d22092","Type":"ContainerStarted","Data":"c11694e0707d2732fd1be5cd70d589074588b1a7d6ac63ffb9700e8c895bdf4b"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.130317 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e39a0dce-4da5-4ff4-9e50-e2dc41d22092","Type":"ContainerStarted","Data":"d23a30b732d3550e7f4fd9d33de0bb2e06d49f52f74bf2c1f1b70b86fa8d355f"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135107 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" exitCode=0 Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135153 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" exitCode=2 Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135162 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" exitCode=0 Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135223 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135259 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.135272 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.137672 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f57285ef-f362-4fb7-8f6c-633698507b3d","Type":"ContainerStarted","Data":"c269891f6d51656027160994fcc1575421835dc5b64fd93373cd5c08654cab89"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.137734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"f57285ef-f362-4fb7-8f6c-633698507b3d","Type":"ContainerStarted","Data":"ca10c9a8283b6f8a3e9739dc4fadf52c2249f1cae1c2703f3b2ed565d78a2484"} Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.159975 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.705210744 podStartE2EDuration="2.159950738s" podCreationTimestamp="2026-02-17 16:21:35 +0000 UTC" firstStartedPulling="2026-02-17 16:21:36.416792478 +0000 UTC m=+1608.833810456" lastFinishedPulling="2026-02-17 16:21:36.871532472 +0000 UTC m=+1609.288550450" observedRunningTime="2026-02-17 16:21:37.149686942 +0000 UTC m=+1609.566704940" watchObservedRunningTime="2026-02-17 16:21:37.159950738 +0000 UTC m=+1609.576968716" Feb 17 16:21:37 crc kubenswrapper[4829]: I0217 16:21:37.200966 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.732849518 podStartE2EDuration="2.200943381s" podCreationTimestamp="2026-02-17 16:21:35 +0000 UTC" firstStartedPulling="2026-02-17 16:21:36.108037875 +0000 UTC m=+1608.525055843" lastFinishedPulling="2026-02-17 16:21:36.576131728 +0000 UTC m=+1608.993149706" observedRunningTime="2026-02-17 16:21:37.174039427 +0000 UTC m=+1609.591057405" watchObservedRunningTime="2026-02-17 16:21:37.200943381 +0000 UTC m=+1609.617961359" Feb 17 16:21:38 crc kubenswrapper[4829]: I0217 16:21:38.148894 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:21:38 crc kubenswrapper[4829]: E0217 16:21:38.501907 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory 
cache]" Feb 17 16:21:38 crc kubenswrapper[4829]: E0217 16:21:38.731828 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.146547 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.192663 4829 generic.go:334] "Generic (PLEG): container finished" podID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" exitCode=0 Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.195873 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"} Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196770 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2","Type":"ContainerDied","Data":"917e80d190c9f417c6d7ad24e1ab772a0f50f28f3fab4aadaa2a3c83b5714c95"} Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.196790 4829 scope.go:117] "RemoveContainer" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.256242 4829 scope.go:117] "RemoveContainer" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" Feb 17 16:21:39 crc 
kubenswrapper[4829]: I0217 16:21:39.269273 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269324 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269506 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269684 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269745 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269801 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") pod 
\"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.269820 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") pod \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\" (UID: \"2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2\") " Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.271501 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.271823 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.277877 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts" (OuterVolumeSpecName: "scripts") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.277976 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn" (OuterVolumeSpecName: "kube-api-access-96skn") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "kube-api-access-96skn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.280279 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.280877 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.288865 4829 scope.go:117] "RemoveContainer" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.311781 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375681 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375904 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375916 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96skn\" (UniqueName: \"kubernetes.io/projected/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-kube-api-access-96skn\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375924 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.375932 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.380524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.406149 4829 scope.go:117] "RemoveContainer" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.442355 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data" (OuterVolumeSpecName: "config-data") pod "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" (UID: "2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444229 4829 scope.go:117] "RemoveContainer" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.444807 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": container with ID starting with 24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d not found: ID does not exist" containerID="24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444848 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d"} err="failed to get container status \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": rpc error: code = NotFound desc = could not find container \"24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d\": container with ID starting with 24c359e56ca0512b9e5eafb6416901ee1e04749d2027957659b255d9240ef17d not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.444873 4829 scope.go:117] "RemoveContainer" 
containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445196 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": container with ID starting with 26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66 not found: ID does not exist" containerID="26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445229 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66"} err="failed to get container status \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": rpc error: code = NotFound desc = could not find container \"26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66\": container with ID starting with 26f494d6dc2ad74ef4bbbb96b75339a0f07090f8815fe390dec71a218b9ccf66 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445249 4829 scope.go:117] "RemoveContainer" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445483 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": container with ID starting with c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1 not found: ID does not exist" containerID="c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445505 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1"} err="failed to get container status \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": rpc error: code = NotFound desc = could not find container \"c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1\": container with ID starting with c2a1a880e69963b79327a1fa843b3170dd0d99cd29485a5978531bc337315ad1 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445516 4829 scope.go:117] "RemoveContainer" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.445817 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": container with ID starting with a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297 not found: ID does not exist" containerID="a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.445840 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297"} err="failed to get container status \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": rpc error: code = NotFound desc = could not find container \"a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297\": container with ID starting with a96f30afd75ccfb95e5445e3d6a6de532f524c5124c8a18ef8d4777071f0a297 not found: ID does not exist" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.477931 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc 
kubenswrapper[4829]: I0217 16:21:39.477993 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.535383 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.558618 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.571745 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572318 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572335 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572349 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572355 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572378 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572385 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: E0217 16:21:39.572402 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572408 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572633 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="proxy-httpd" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572657 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-central-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572674 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="ceilometer-notification-agent" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.572686 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" containerName="sg-core" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.574745 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.577841 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.578029 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.580262 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.590673 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681954 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.681975 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682395 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682483 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682726 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682755 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.682858 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784264 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784315 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784387 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784594 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.784611 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.785189 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.787061 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789067 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789284 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.789314 
4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.793746 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.793747 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.806845 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"ceilometer-0\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " pod="openstack/ceilometer-0" Feb 17 16:21:39 crc kubenswrapper[4829]: I0217 16:21:39.893638 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:21:40 crc kubenswrapper[4829]: I0217 16:21:40.295743 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2" path="/var/lib/kubelet/pods/2e98bc7f-f531-46d0-8830-a4e0fcb3d8f2/volumes" Feb 17 16:21:40 crc kubenswrapper[4829]: I0217 16:21:40.430998 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:21:41 crc kubenswrapper[4829]: I0217 16:21:41.221198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} Feb 17 16:21:41 crc kubenswrapper[4829]: I0217 16:21:41.222190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"d8d11c7e5bc799f3b0a7fe14e7081721edd114e2dc2bdd16476077b9f7c7412d"} Feb 17 16:21:42 crc kubenswrapper[4829]: I0217 16:21:42.261879 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} Feb 17 16:21:43 crc kubenswrapper[4829]: I0217 16:21:43.278472 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.304384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerStarted","Data":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 
16:21:45.304768 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.331122 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.73760782 podStartE2EDuration="6.331100676s" podCreationTimestamp="2026-02-17 16:21:39 +0000 UTC" firstStartedPulling="2026-02-17 16:21:40.432397758 +0000 UTC m=+1612.849415736" lastFinishedPulling="2026-02-17 16:21:44.025890614 +0000 UTC m=+1616.442908592" observedRunningTime="2026-02-17 16:21:45.326173363 +0000 UTC m=+1617.743191341" watchObservedRunningTime="2026-02-17 16:21:45.331100676 +0000 UTC m=+1617.748118654" Feb 17 16:21:45 crc kubenswrapper[4829]: I0217 16:21:45.606152 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.256620 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.256828 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aced48a_e424_4579_a0f3_681531606707.slice/crio-41f81b7a49ae4644fe95d993e951316147407fe22675c302581a7dac92b57d2a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:21:48 crc kubenswrapper[4829]: E0217 16:21:48.349756 4829 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7f836c5e6c4dc8ae142ea06fb1094515b55e687113f4883084160fc00bddb596/diff" to get inode 
usage: stat /var/lib/containers/storage/overlay/7f836c5e6c4dc8ae142ea06fb1094515b55e687113f4883084160fc00bddb596/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_aodh-0_0aced48a-e424-4579-a0f3-681531606707/aodh-api/0.log" to get inode usage: stat /var/log/pods/openstack_aodh-0_0aced48a-e424-4579-a0f3-681531606707/aodh-api/0.log: no such file or directory Feb 17 16:21:51 crc kubenswrapper[4829]: I0217 16:21:51.280393 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:21:51 crc kubenswrapper[4829]: E0217 16:21:51.281539 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:04 crc kubenswrapper[4829]: I0217 16:22:04.279669 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:04 crc kubenswrapper[4829]: E0217 16:22:04.280790 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:09 crc kubenswrapper[4829]: I0217 16:22:09.908215 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:22:17 crc kubenswrapper[4829]: I0217 16:22:17.280995 4829 scope.go:117] 
"RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:17 crc kubenswrapper[4829]: E0217 16:22:17.283797 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:21 crc kubenswrapper[4829]: I0217 16:22:21.977500 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:22:21 crc kubenswrapper[4829]: I0217 16:22:21.999409 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.013637 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-89gpt"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.025378 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-mgkjx"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.044157 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.046432 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.054319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125695 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125831 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.125953 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228370 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228482 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.228586 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.234706 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-combined-ca-bundle\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.235412 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7091b35-889b-422b-aead-117292847a8a-config-data\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.260304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk5m\" (UniqueName: \"kubernetes.io/projected/a7091b35-889b-422b-aead-117292847a8a-kube-api-access-kqk5m\") pod \"heat-db-sync-qptzd\" (UID: \"a7091b35-889b-422b-aead-117292847a8a\") " pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.293982 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d3ed60-8c68-44ec-aaa1-806b5aec5df1" path="/var/lib/kubelet/pods/79d3ed60-8c68-44ec-aaa1-806b5aec5df1/volumes" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.295061 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89e689f-68fd-4357-a2a0-1d4b8d130702" path="/var/lib/kubelet/pods/c89e689f-68fd-4357-a2a0-1d4b8d130702/volumes" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.371188 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qptzd" Feb 17 16:22:22 crc kubenswrapper[4829]: I0217 16:22:22.981638 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qptzd"] Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111791 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111854 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.111997 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.113333 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:23 crc kubenswrapper[4829]: I0217 16:22:23.796811 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:23 crc kubenswrapper[4829]: I0217 16:22:23.871339 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qptzd" event={"ID":"a7091b35-889b-422b-aead-117292847a8a","Type":"ContainerStarted","Data":"b2493eae309be4cd73f62f5acf506639f826fdfee8d1c7942d3e2c20faed1b14"} Feb 17 16:22:23 crc kubenswrapper[4829]: E0217 16:22:23.873396 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219381 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219748 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" containerID="cri-o://77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219894 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" containerID="cri-o://508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219944 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core" containerID="cri-o://0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.219984 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" containerID="cri-o://99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" gracePeriod=30 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883118 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" exitCode=0 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883402 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" exitCode=2 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883411 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" exitCode=0 Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883205 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883511 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.883526 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} Feb 17 16:22:24 crc kubenswrapper[4829]: E0217 16:22:24.885158 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:24 crc kubenswrapper[4829]: I0217 16:22:24.899371 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:28 crc kubenswrapper[4829]: I0217 16:22:28.298673 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:28 crc kubenswrapper[4829]: E0217 16:22:28.299546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:28 crc kubenswrapper[4829]: I0217 16:22:28.386261 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" containerID="cri-o://6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00" gracePeriod=604796 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.724704 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" 
containerName="rabbitmq" containerID="cri-o://1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" gracePeriod=604796 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.827725 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937512 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937732 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937869 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937917 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.937955 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") pod 
\"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938121 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938210 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") pod \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\" (UID: \"4fe2d3ad-54aa-4d5c-b875-2683ed774353\") " Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938211 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.938974 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.939652 4829 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.939685 4829 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fe2d3ad-54aa-4d5c-b875-2683ed774353-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.943700 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw" (OuterVolumeSpecName: "kube-api-access-dqrvw") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "kube-api-access-dqrvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.944697 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts" (OuterVolumeSpecName: "scripts") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.945939 4829 generic.go:334] "Generic (PLEG): container finished" podID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" exitCode=0 Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.945987 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946022 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fe2d3ad-54aa-4d5c-b875-2683ed774353","Type":"ContainerDied","Data":"d8d11c7e5bc799f3b0a7fe14e7081721edd114e2dc2bdd16476077b9f7c7412d"} Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946047 4829 scope.go:117] "RemoveContainer" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.946058 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:29 crc kubenswrapper[4829]: I0217 16:22:29.977289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.026232 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.041219 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043676 4829 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043709 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043756 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrvw\" (UniqueName: \"kubernetes.io/projected/4fe2d3ad-54aa-4d5c-b875-2683ed774353-kube-api-access-dqrvw\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043770 4829 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-sg-core-conf-yaml\") 
on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.043778 4829 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.056743 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data" (OuterVolumeSpecName: "config-data") pod "4fe2d3ad-54aa-4d5c-b875-2683ed774353" (UID: "4fe2d3ad-54aa-4d5c-b875-2683ed774353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.089684 4829 scope.go:117] "RemoveContainer" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.117185 4829 scope.go:117] "RemoveContainer" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.144644 4829 scope.go:117] "RemoveContainer" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.147485 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe2d3ad-54aa-4d5c-b875-2683ed774353-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.173738 4829 scope.go:117] "RemoveContainer" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.174283 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": container with ID starting 
with 508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab not found: ID does not exist" containerID="508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.174327 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab"} err="failed to get container status \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": rpc error: code = NotFound desc = could not find container \"508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab\": container with ID starting with 508790da025c6da1c15a7c39d978047cd81b4a3cfeb0191ee78badfcd03ec2ab not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.174357 4829 scope.go:117] "RemoveContainer" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.174886 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": container with ID starting with 0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936 not found: ID does not exist" containerID="0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.175363 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936"} err="failed to get container status \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": rpc error: code = NotFound desc = could not find container \"0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936\": container with ID starting with 0ec540dd8c28f2525a3891639c91f0b76f24a03e8850f25516159bd42e1dd936 not found: ID does 
not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.175797 4829 scope.go:117] "RemoveContainer" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.176708 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": container with ID starting with 99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c not found: ID does not exist" containerID="99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.176744 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c"} err="failed to get container status \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": rpc error: code = NotFound desc = could not find container \"99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c\": container with ID starting with 99cb1acb1660e087fb25f3c09905c1eabd201308a6709e1a191cd22246fa4d9c not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.176767 4829 scope.go:117] "RemoveContainer" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.177115 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": container with ID starting with 77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181 not found: ID does not exist" containerID="77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.177131 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181"} err="failed to get container status \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": rpc error: code = NotFound desc = could not find container \"77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181\": container with ID starting with 77aada026b783d79179dde2374614236ad7ec24785afb7da35528a6aa91f7181 not found: ID does not exist" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.328307 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.348412 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.367757 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368419 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368440 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368458 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368467 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368486 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" Feb 17 16:22:30 crc 
kubenswrapper[4829]: I0217 16:22:30.368496 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: E0217 16:22:30.368526 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368534 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368815 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-central-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368849 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="proxy-httpd" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368873 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="sg-core" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.368890 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" containerName="ceilometer-notification-agent" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.371521 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.374813 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.375036 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.375211 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.402605 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453568 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453625 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453650 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453690 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453751 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453778 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453796 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.453830 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556317 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: 
\"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556440 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556509 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556751 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.556997 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557085 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557138 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557250 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557448 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-log-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.557906 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e01f505e-09de-4b7d-ae8a-b9f392c3b592-run-httpd\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.561057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.561510 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-config-data\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc 
kubenswrapper[4829]: I0217 16:22:30.562519 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.564297 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.575373 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e01f505e-09de-4b7d-ae8a-b9f392c3b592-scripts\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.575742 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlgx\" (UniqueName: \"kubernetes.io/projected/e01f505e-09de-4b7d-ae8a-b9f392c3b592-kube-api-access-mvlgx\") pod \"ceilometer-0\" (UID: \"e01f505e-09de-4b7d-ae8a-b9f392c3b592\") " pod="openstack/ceilometer-0" Feb 17 16:22:30 crc kubenswrapper[4829]: I0217 16:22:30.720544 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:22:31 crc kubenswrapper[4829]: W0217 16:22:31.304942 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01f505e_09de_4b7d_ae8a_b9f392c3b592.slice/crio-ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7 WatchSource:0}: Error finding container ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7: Status 404 returned error can't find the container with id ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7 Feb 17 16:22:31 crc kubenswrapper[4829]: I0217 16:22:31.309862 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.424960 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.425022 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:31 crc kubenswrapper[4829]: E0217 16:22:31.425159 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:31 crc kubenswrapper[4829]: I0217 16:22:31.974695 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"ddf82c45c8169112afd27bd07b7b19ef95187e50900a4acf0a21356e03aac4b7"} Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.312886 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe2d3ad-54aa-4d5c-b875-2683ed774353" path="/var/lib/kubelet/pods/4fe2d3ad-54aa-4d5c-b875-2683ed774353/volumes" Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.997907 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"cbe778ccec508c84598a4abeef47ed9a0768c53d6ccce4ed245fb45058a970d7"} Feb 17 16:22:32 crc kubenswrapper[4829]: I0217 16:22:32.998202 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"786e109818c6005753b8c470c8e72a7b694be9c3948e59c5789ef8477a177bc4"} Feb 17 16:22:34 crc kubenswrapper[4829]: E0217 16:22:34.453666 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.029081 4829 generic.go:334] "Generic (PLEG): container finished" podID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerID="6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00" exitCode=0 Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.029187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00"} Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.033140 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e01f505e-09de-4b7d-ae8a-b9f392c3b592","Type":"ContainerStarted","Data":"6363ba2128e84ecbd3d2bf246f5413ef29b9ca0801b406d6dbbf538246845237"} Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.033338 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:22:35 crc kubenswrapper[4829]: E0217 16:22:35.034855 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.114792 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.182597 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.182645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.183298 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186674 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186743 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186798 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186840 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.186975 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187020 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187041 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187063 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") pod \"257c3943-bfcb-409b-a915-bacfd95d9c93\" (UID: \"257c3943-bfcb-409b-a915-bacfd95d9c93\") " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.187599 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.188393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.189366 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf" (OuterVolumeSpecName: "kube-api-access-n8ndf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "kube-api-access-n8ndf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.189832 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.206928 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.207067 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.207758 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info" (OuterVolumeSpecName: "pod-info") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.230216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data" (OuterVolumeSpecName: "config-data") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.249274 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33" (OuterVolumeSpecName: "persistence") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294358 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8ndf\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-kube-api-access-n8ndf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294407 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") on node \"crc\" " Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294420 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294430 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294439 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/257c3943-bfcb-409b-a915-bacfd95d9c93-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294450 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294460 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/257c3943-bfcb-409b-a915-bacfd95d9c93-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.294470 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.295693 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf" (OuterVolumeSpecName: "server-conf") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.319852 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.358954 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.359100 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33") on node "crc" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.391393 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "257c3943-bfcb-409b-a915-bacfd95d9c93" (UID: "257c3943-bfcb-409b-a915-bacfd95d9c93"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397088 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397127 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/257c3943-bfcb-409b-a915-bacfd95d9c93-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4829]: I0217 16:22:35.397140 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/257c3943-bfcb-409b-a915-bacfd95d9c93-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.058352 4829 generic.go:334] "Generic (PLEG): container finished" podID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerID="1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" exitCode=0 Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.058454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d"} Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062758 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062803 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"257c3943-bfcb-409b-a915-bacfd95d9c93","Type":"ContainerDied","Data":"c1327976e829e36bf707aace77ba8b36b9e8ee9ae74bf54cf9dec45e5ad0042e"} Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.062887 4829 scope.go:117] "RemoveContainer" containerID="6c1c9987764f4c268e12c41d090148b50fb91b3372b89e6153a205fb381e0c00" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.066152 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.091538 4829 scope.go:117] "RemoveContainer" containerID="b5602481d6956e261006c019d83b56aa20b80a7b5986acf5259ea25395fb242b" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.128630 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.140543 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.166077 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.168026 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168095 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" Feb 17 16:22:36 crc 
kubenswrapper[4829]: E0217 16:22:36.168106 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="setup-container" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168113 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="setup-container" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.168709 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" containerName="rabbitmq" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.170646 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.205106 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.322524 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.334081 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257c3943-bfcb-409b-a915-bacfd95d9c93" path="/var/lib/kubelet/pods/257c3943-bfcb-409b-a915-bacfd95d9c93/volumes" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.345942 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346063 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346121 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346172 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.346195 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350721 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350759 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350825 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.350984 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.351021 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395662 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395720 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.395830 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:36 crc kubenswrapper[4829]: E0217 16:22:36.398945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453529 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453689 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453776 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453825 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453849 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 
16:22:36.453874 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453905 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453937 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.453979 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.454004 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.454106 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.455617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.455646 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.457654 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-config-data\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.457909 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.460228 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13860a28-5cd6-4bf9-b60b-3872c76444a8-server-conf\") pod \"rabbitmq-server-2\" (UID: 
\"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.463168 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.463607 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13860a28-5cd6-4bf9-b60b-3872c76444a8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.465064 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.465859 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13860a28-5cd6-4bf9-b60b-3872c76444a8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.466562 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.466661 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cec88d4327ff12753cbf1d7636d4616ad5b51e6f71f7c68ee07d08bc8a1cc1e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.479716 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmhl\" (UniqueName: \"kubernetes.io/projected/13860a28-5cd6-4bf9-b60b-3872c76444a8-kube-api-access-glmhl\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.538320 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4dbac7e5-1658-4194-afda-e4b466ec1e33\") pod \"rabbitmq-server-2\" (UID: \"13860a28-5cd6-4bf9-b60b-3872c76444a8\") " pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.567739 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761021 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761351 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.761544 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762490 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762523 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.762901 4829 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763648 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763678 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763741 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763776 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763848 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") pod 
\"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.763967 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") pod \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\" (UID: \"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d\") " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.764822 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info" (OuterVolumeSpecName: "pod-info") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765067 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765083 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.765164 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.766752 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.767190 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.772868 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk" (OuterVolumeSpecName: "kube-api-access-d5wnk") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "kube-api-access-d5wnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.774759 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.794289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4" (OuterVolumeSpecName: "persistence") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.812489 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data" (OuterVolumeSpecName: "config-data") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.830215 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.844158 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf" (OuterVolumeSpecName: "server-conf") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870087 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870119 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870180 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5wnk\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-kube-api-access-d5wnk\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870288 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") on node \"crc\" " Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870307 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870319 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870358 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath 
\"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.870369 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.894899 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" (UID: "d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.946970 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.947330 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4") on node "crc" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.972476 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:36 crc kubenswrapper[4829]: I0217 16:22:36.972510 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074621 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d","Type":"ContainerDied","Data":"aaae72efaf261c32949e4da7436a82ede517cf555275d36c504a706eeb99a3cb"} Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074690 4829 scope.go:117] "RemoveContainer" containerID="1bac383ecf25ff52c54ee0ef16eb6931792ce901d0f3ba3bd333f7a02176125d" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.074629 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.111686 4829 scope.go:117] "RemoveContainer" containerID="6f70efc094a6a4e60eb282dbd537ad0a77c7eac129d5e6540f310253409325d8" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.135723 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.143427 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.160620 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: E0217 16:22:37.161182 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.161201 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: E0217 16:22:37.161221 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="setup-container" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.161229 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="setup-container" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 
16:22:37.161465 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" containerName="rabbitmq" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.173101 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.179878 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.179924 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180116 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180268 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9x5xf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.180368 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181121 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181132 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.181734 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.278768 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279181 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279257 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279289 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279318 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279352 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279379 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279400 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.279517 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.308751 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 17 16:22:37 crc kubenswrapper[4829]: W0217 16:22:37.308824 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13860a28_5cd6_4bf9_b60b_3872c76444a8.slice/crio-6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6 WatchSource:0}: Error finding container 6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6: Status 404 returned error can't find the container with id 6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6 Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382133 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382181 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382209 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382253 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382275 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382291 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382313 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382385 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 
crc kubenswrapper[4829]: I0217 16:22:37.382444 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382473 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.382505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383051 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383065 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.383639 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.384433 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4c6b5337-789c-48a9-b772-3d96b64640e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.385710 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.385736 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c712c179c4211caeb2d08f251b409f456d9a156c71e8c917f92effa050520833/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.387122 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4c6b5337-789c-48a9-b772-3d96b64640e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.387926 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.388336 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4c6b5337-789c-48a9-b772-3d96b64640e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.389304 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.406154 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kjzt\" (UniqueName: \"kubernetes.io/projected/4c6b5337-789c-48a9-b772-3d96b64640e6-kube-api-access-2kjzt\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.437750 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5d57c8c-4f26-424b-9fe3-00cebb4244f4\") pod \"rabbitmq-cell1-server-0\" (UID: \"4c6b5337-789c-48a9-b772-3d96b64640e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:37 crc kubenswrapper[4829]: I0217 16:22:37.576901 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.091826 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"6c849935e0f21d4d1047dd480b8f33cc4ded756d13cbd6e9de9c52f8e94e3ef6"} Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.143745 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:22:38 crc kubenswrapper[4829]: W0217 16:22:38.146142 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6b5337_789c_48a9_b772_3d96b64640e6.slice/crio-2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3 WatchSource:0}: Error finding container 2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3: Status 404 returned error can't find the container with id 
2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3 Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.312546 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d" path="/var/lib/kubelet/pods/d18c52f3-efc1-4a9b-a7b0-b19bc419dd4d/volumes" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.507645 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.512544 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.520125 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.548648 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.611850 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.611927 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612039 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612067 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612134 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612173 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.612211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.713633 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.713945 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714019 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714058 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714093 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714118 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: 
\"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714156 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.714539 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715075 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715143 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715695 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod 
\"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.715821 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.716208 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.746442 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"dnsmasq-dns-594cb89c79-scz5z\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:38 crc kubenswrapper[4829]: I0217 16:22:38.842701 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.103282 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"2e317d3b715e53f5972796b25d6c52c8e1b1a81682f4cf040518ec81da5921e3"} Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.106221 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28"} Feb 17 16:22:39 crc kubenswrapper[4829]: I0217 16:22:39.336146 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:39 crc kubenswrapper[4829]: W0217 16:22:39.339562 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9656ce3d_4ce5_4e96_8d26_ceb6f4e27a99.slice/crio-6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77 WatchSource:0}: Error finding container 6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77: Status 404 returned error can't find the container with id 6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77 Feb 17 16:22:40 crc kubenswrapper[4829]: I0217 16:22:40.123286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} Feb 17 16:22:40 crc kubenswrapper[4829]: I0217 16:22:40.123641 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" 
event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.138281 4829 generic.go:334] "Generic (PLEG): container finished" podID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" exitCode=0 Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.138385 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.140537 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855"} Feb 17 16:22:41 crc kubenswrapper[4829]: I0217 16:22:41.281038 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:41 crc kubenswrapper[4829]: E0217 16:22:41.281441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:42 crc kubenswrapper[4829]: I0217 16:22:42.166232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" 
event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerStarted","Data":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} Feb 17 16:22:42 crc kubenswrapper[4829]: I0217 16:22:42.205447 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" podStartSLOduration=4.205417991 podStartE2EDuration="4.205417991s" podCreationTimestamp="2026-02-17 16:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:42.18980869 +0000 UTC m=+1674.606826718" watchObservedRunningTime="2026-02-17 16:22:42.205417991 +0000 UTC m=+1674.622435979" Feb 17 16:22:43 crc kubenswrapper[4829]: I0217 16:22:43.181722 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:47 crc kubenswrapper[4829]: I0217 16:22:47.296829 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422386 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422486 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.422745 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:22:47 crc kubenswrapper[4829]: E0217 16:22:47.424154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:48 crc kubenswrapper[4829]: E0217 16:22:48.260074 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.844800 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.935906 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:48 crc kubenswrapper[4829]: I0217 16:22:48.936409 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" containerID="cri-o://5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" gracePeriod=10 Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.146153 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.148265 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.162418 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.292526 4829 generic.go:334] "Generic (PLEG): container finished" podID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerID="5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" exitCode=0 Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.292818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e"} Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304190 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304211 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: 
\"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304296 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304324 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.304353 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406149 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod 
\"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406193 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406254 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406292 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406322 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406371 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: 
\"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.406547 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.407453 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-config\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.407965 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408254 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408551 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 
16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.408922 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.409617 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.430985 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsv2\" (UniqueName: \"kubernetes.io/projected/de1b2a48-73a6-48b7-94d8-1c24530f4d2b-kube-api-access-2jsv2\") pod \"dnsmasq-dns-5596c69fcc-hfgfn\" (UID: \"de1b2a48-73a6-48b7-94d8-1c24530f4d2b\") " pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.473479 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.708093 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814422 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814533 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814622 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814673 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814781 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.814897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") pod \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\" (UID: \"3fdb8e01-6d92-47be-a6a8-4d2e39d42152\") " Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.828327 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4" (OuterVolumeSpecName: "kube-api-access-jvqs4") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "kube-api-access-jvqs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.894413 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.917364 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.917396 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvqs4\" (UniqueName: \"kubernetes.io/projected/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-kube-api-access-jvqs4\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.927332 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config" (OuterVolumeSpecName: "config") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.933831 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.940478 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:49 crc kubenswrapper[4829]: I0217 16:22:49.953144 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fdb8e01-6d92-47be-a6a8-4d2e39d42152" (UID: "3fdb8e01-6d92-47be-a6a8-4d2e39d42152"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019922 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019961 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019978 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.019991 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fdb8e01-6d92-47be-a6a8-4d2e39d42152-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.160813 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hfgfn"] Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.318104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerStarted","Data":"658c262b31dab8fa64bda70171117a69b0cd30958700a8f068147d23f7aff478"} Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.321678 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" event={"ID":"3fdb8e01-6d92-47be-a6a8-4d2e39d42152","Type":"ContainerDied","Data":"9ffc35f3ee01d1035d556620fea766ea2c01f0cbdb7a20c299c532e63cbdcaee"} Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.321760 4829 scope.go:117] 
"RemoveContainer" containerID="5612a95a4d0063d6925f0f9c1093228a56b1c7561b2493b73de1f3f85602093e" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.322001 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-cq899" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.380255 4829 scope.go:117] "RemoveContainer" containerID="d27a3e7ff4c578134cfc75f05c01c01bfbf62aff36f8812227638d6f01aa6d68" Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.415818 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:50 crc kubenswrapper[4829]: I0217 16:22:50.428302 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-cq899"] Feb 17 16:22:51 crc kubenswrapper[4829]: E0217 16:22:51.280895 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:22:51 crc kubenswrapper[4829]: I0217 16:22:51.338103 4829 generic.go:334] "Generic (PLEG): container finished" podID="de1b2a48-73a6-48b7-94d8-1c24530f4d2b" containerID="4e9bde6d42e9871da8ffb869aabe5aeb3dbe328d0f62ce7ae655427b1a6286b9" exitCode=0 Feb 17 16:22:51 crc kubenswrapper[4829]: I0217 16:22:51.338207 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerDied","Data":"4e9bde6d42e9871da8ffb869aabe5aeb3dbe328d0f62ce7ae655427b1a6286b9"} Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.281215 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:22:52 crc kubenswrapper[4829]: E0217 16:22:52.282469 
4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.307825 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" path="/var/lib/kubelet/pods/3fdb8e01-6d92-47be-a6a8-4d2e39d42152/volumes" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.365649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" event={"ID":"de1b2a48-73a6-48b7-94d8-1c24530f4d2b","Type":"ContainerStarted","Data":"63ebe057e4e9114ce7c31db34d9d9fec65c3a33829164d5a3068de5b975ecd60"} Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.365827 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:52 crc kubenswrapper[4829]: I0217 16:22:52.400383 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" podStartSLOduration=3.40035772 podStartE2EDuration="3.40035772s" podCreationTimestamp="2026-02-17 16:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:52.393192377 +0000 UTC m=+1684.810210395" watchObservedRunningTime="2026-02-17 16:22:52.40035772 +0000 UTC m=+1684.817375738" Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.476891 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-hfgfn" Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.566871 4829 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:22:59 crc kubenswrapper[4829]: I0217 16:22:59.568348 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" containerID="cri-o://f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" gracePeriod=10 Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.236451 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.336911 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.336958 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337062 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337080 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") pod 
\"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337153 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337220 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.337327 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") pod \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\" (UID: \"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99\") " Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.347828 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz" (OuterVolumeSpecName: "kube-api-access-lt6vz") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "kube-api-access-lt6vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.396686 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config" (OuterVolumeSpecName: "config") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.405878 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.415296 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.420876 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.431475 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.432125 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" (UID: "9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441363 4829 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441424 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441440 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt6vz\" (UniqueName: \"kubernetes.io/projected/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-kube-api-access-lt6vz\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441452 4829 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441464 4829 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441492 4829 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.441504 4829 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494339 4829 generic.go:334] "Generic (PLEG): container finished" podID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494388 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.494425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.495620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-scz5z" event={"ID":"9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99","Type":"ContainerDied","Data":"6f78b4c7c77a3fd41331059baa9cf07d6d3476716c1b634ba3b502e421586a77"} Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.495638 4829 scope.go:117] "RemoveContainer" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.533095 4829 scope.go:117] "RemoveContainer" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.541076 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:23:00 crc 
kubenswrapper[4829]: I0217 16:23:00.554423 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-scz5z"] Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.555659 4829 scope.go:117] "RemoveContainer" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: E0217 16:23:00.556152 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": container with ID starting with f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4 not found: ID does not exist" containerID="f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556199 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4"} err="failed to get container status \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": rpc error: code = NotFound desc = could not find container \"f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4\": container with ID starting with f2ab88e408977b2494d11de7eadf619ebcb9888457f1c0e262f1470aeee680d4 not found: ID does not exist" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556296 4829 scope.go:117] "RemoveContainer" containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: E0217 16:23:00.556818 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": container with ID starting with 2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9 not found: ID does not exist" 
containerID="2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9" Feb 17 16:23:00 crc kubenswrapper[4829]: I0217 16:23:00.556862 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9"} err="failed to get container status \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": rpc error: code = NotFound desc = could not find container \"2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9\": container with ID starting with 2f8c3089b760b1edc81ec5465ba4cf693c3723aacd5a1f5bf4793c25e969e5d9 not found: ID does not exist" Feb 17 16:23:02 crc kubenswrapper[4829]: E0217 16:23:02.283258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:02 crc kubenswrapper[4829]: I0217 16:23:02.300920 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" path="/var/lib/kubelet/pods/9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99/volumes" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422066 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422601 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.422853 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:03 crc kubenswrapper[4829]: E0217 16:23:03.424472 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:06 crc kubenswrapper[4829]: I0217 16:23:06.280092 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:06 crc kubenswrapper[4829]: E0217 16:23:06.281113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:11 crc kubenswrapper[4829]: I0217 16:23:11.638924 4829 generic.go:334] "Generic (PLEG): container finished" podID="13860a28-5cd6-4bf9-b60b-3872c76444a8" containerID="d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28" exitCode=0 Feb 17 16:23:11 crc kubenswrapper[4829]: I0217 16:23:11.639154 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerDied","Data":"d457f52bc7d4c0903ea9445db598633b1452c1ea2f3aa11f01ac06c730cb4e28"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.658375 4829 generic.go:334] "Generic (PLEG): container finished" podID="4c6b5337-789c-48a9-b772-3d96b64640e6" containerID="2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855" exitCode=0 Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.658545 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerDied","Data":"2fc4da119a9fe1683bd454529375ea5a04d0dea47f5bdd91e2d2cb0666452855"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.661644 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"13860a28-5cd6-4bf9-b60b-3872c76444a8","Type":"ContainerStarted","Data":"17958486db1f8626286073b7193b9fc9f2a71fed07c7a02278e530d40fb15d7e"} Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.661901 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 17 16:23:12 crc kubenswrapper[4829]: I0217 16:23:12.735315 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=36.73530077 podStartE2EDuration="36.73530077s" podCreationTimestamp="2026-02-17 16:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:12.730493399 +0000 UTC m=+1705.147511387" watchObservedRunningTime="2026-02-17 16:23:12.73530077 +0000 UTC m=+1705.152318738" Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.681154 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4c6b5337-789c-48a9-b772-3d96b64640e6","Type":"ContainerStarted","Data":"0bd32012d7a00b558d50fae45ce486aee73bc59eb9fb23789c1ad852bd5e7305"} Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.681616 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:13 crc kubenswrapper[4829]: I0217 16:23:13.716945 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.716925464 podStartE2EDuration="36.716925464s" podCreationTimestamp="2026-02-17 16:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:13.7079469 +0000 UTC m=+1706.124964908" watchObservedRunningTime="2026-02-17 16:23:13.716925464 +0000 UTC m=+1706.133943442" Feb 17 
16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.755907 4829 scope.go:117] "RemoveContainer" containerID="60ef148a9d569ecc3b36c99d002422d97d0d77f354ca64920a10679c00f4b801" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.781985 4829 scope.go:117] "RemoveContainer" containerID="49cf6b186c4b1a0047d7ceda695346c714e6db90adc01877e5df1fc27af9a053" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.847161 4829 scope.go:117] "RemoveContainer" containerID="d54a6a2049e7874f777d315503bfb5d47cd59944424b597b3813fb29a67a0531" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.909314 4829 scope.go:117] "RemoveContainer" containerID="4d93de9573607e7eb19f92afc0666fb2923ce4dbcca16c34f41221619cb47b89" Feb 17 16:23:14 crc kubenswrapper[4829]: I0217 16:23:14.971019 4829 scope.go:117] "RemoveContainer" containerID="8a8df6b49cb30bade4727d213073afef4b05bc075b9cbc7ba5af5bade7e92ba3" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.281645 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.366891 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.366949 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.367104 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.368234 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596193 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596673 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596689 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596714 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596722 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596737 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596743 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" 
Feb 17 16:23:17 crc kubenswrapper[4829]: E0217 16:23:17.596755 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596760 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="init" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596970 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9656ce3d-4ce5-4e96-8d26-ceb6f4e27a99" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.596996 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fdb8e01-6d92-47be-a6a8-4d2e39d42152" containerName="dnsmasq-dns" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.597806 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602472 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602728 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602884 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.602951 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.622286 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.673903 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674450 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674549 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.674665 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777359 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: 
\"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777454 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777506 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.777620 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.784391 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.784613 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.785431 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.795878 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:17 crc kubenswrapper[4829]: I0217 16:23:17.920683 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:18 crc kubenswrapper[4829]: E0217 16:23:18.294819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:18 crc kubenswrapper[4829]: I0217 16:23:18.611291 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t"] Feb 17 16:23:18 crc kubenswrapper[4829]: I0217 16:23:18.738734 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerStarted","Data":"a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805"} Feb 17 16:23:19 crc kubenswrapper[4829]: I0217 16:23:19.280546 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:19 crc kubenswrapper[4829]: E0217 16:23:19.281117 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:20 crc kubenswrapper[4829]: I0217 16:23:20.358255 4829 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3fdb8e01-6d92-47be-a6a8-4d2e39d42152"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort 
pod3fdb8e01-6d92-47be-a6a8-4d2e39d42152] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3fdb8e01_6d92_47be_a6a8_4d2e39d42152.slice" Feb 17 16:23:26 crc kubenswrapper[4829]: I0217 16:23:26.833756 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 17 16:23:26 crc kubenswrapper[4829]: I0217 16:23:26.938615 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:27 crc kubenswrapper[4829]: I0217 16:23:27.579831 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:23:29 crc kubenswrapper[4829]: E0217 16:23:29.319745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:29 crc kubenswrapper[4829]: I0217 16:23:29.909369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerStarted","Data":"f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c"} Feb 17 16:23:29 crc kubenswrapper[4829]: I0217 16:23:29.936316 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" podStartSLOduration=2.179034714 podStartE2EDuration="12.936296555s" podCreationTimestamp="2026-02-17 16:23:17 +0000 UTC" firstStartedPulling="2026-02-17 16:23:18.587854274 +0000 UTC m=+1711.004872242" lastFinishedPulling="2026-02-17 16:23:29.345112335 +0000 UTC m=+1721.762134083" observedRunningTime="2026-02-17 16:23:29.933290203 +0000 UTC m=+1722.350308181" watchObservedRunningTime="2026-02-17 
16:23:29.936296555 +0000 UTC m=+1722.353314553" Feb 17 16:23:32 crc kubenswrapper[4829]: I0217 16:23:32.182356 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" containerID="cri-o://7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" gracePeriod=604795 Feb 17 16:23:32 crc kubenswrapper[4829]: E0217 16:23:32.282555 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:34 crc kubenswrapper[4829]: I0217 16:23:34.280624 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:34 crc kubenswrapper[4829]: E0217 16:23:34.281079 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:35 crc kubenswrapper[4829]: I0217 16:23:35.227202 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.860476 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.948645 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949007 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949031 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949146 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949204 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949319 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949364 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949387 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949511 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.949551 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.951150 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.951924 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\" (UID: \"328bcfe0-93b6-44bb-83ca-2b3a105f1548\") " Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.952946 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.955758 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.956271 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.956659 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info" (OuterVolumeSpecName: "pod-info") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.968002 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.973047 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.973216 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2" (OuterVolumeSpecName: "kube-api-access-vm5t2") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "kube-api-access-vm5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4829]: I0217 16:23:38.993872 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data" (OuterVolumeSpecName: "config-data") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.019477 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f" (OuterVolumeSpecName: "persistence") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "pvc-84d96401-ecc6-4b20-91e2-fae52f90027f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020476 4829 generic.go:334] "Generic (PLEG): container finished" podID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" exitCode=0 Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020522 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020548 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"328bcfe0-93b6-44bb-83ca-2b3a105f1548","Type":"ContainerDied","Data":"bb8c95494e3f4fa519ef091eaa05fa7291513d824c65555761e45faf40bec928"} Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020550 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.020565 4829 scope.go:117] "RemoveContainer" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.050670 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf" (OuterVolumeSpecName: "server-conf") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054929 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/328bcfe0-93b6-44bb-83ca-2b3a105f1548-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054961 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.054994 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") on node \"crc\" " Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055006 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055014 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/328bcfe0-93b6-44bb-83ca-2b3a105f1548-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055023 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055033 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/328bcfe0-93b6-44bb-83ca-2b3a105f1548-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055041 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5t2\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-kube-api-access-vm5t2\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.055049 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.101184 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.101326 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-84d96401-ecc6-4b20-91e2-fae52f90027f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f") on node "crc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.120280 4829 scope.go:117] "RemoveContainer" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.141046 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "328bcfe0-93b6-44bb-83ca-2b3a105f1548" (UID: "328bcfe0-93b6-44bb-83ca-2b3a105f1548"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142178 4829 scope.go:117] "RemoveContainer" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.142697 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": container with ID starting with 7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc not found: ID does not exist" containerID="7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142730 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc"} err="failed to get container status \"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": rpc error: code = NotFound desc = could not find container 
\"7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc\": container with ID starting with 7064c5c25d4680ab6765509cd53b1de1f264492696babd33ebaf9a777fe0d5bc not found: ID does not exist" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.142750 4829 scope.go:117] "RemoveContainer" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.143050 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": container with ID starting with 42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847 not found: ID does not exist" containerID="42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.143092 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847"} err="failed to get container status \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": rpc error: code = NotFound desc = could not find container \"42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847\": container with ID starting with 42ec937ec7e1b8a85143da99b6832655f5591d2e8236923aaf7f5787f3251847 not found: ID does not exist" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.157427 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.157461 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/328bcfe0-93b6-44bb-83ca-2b3a105f1548-rabbitmq-confd\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.359991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.417269 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.429402 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.430041 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="setup-container" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430073 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="setup-container" Feb 17 16:23:39 crc kubenswrapper[4829]: E0217 16:23:39.430083 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430092 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.430401 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" containerName="rabbitmq" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.432119 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.453748 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599031 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599390 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599427 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599566 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599629 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599660 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599676 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599694 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599709 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599744 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.599762 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702246 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702358 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702400 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: 
\"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702418 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702432 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702505 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702522 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702631 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702684 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.702789 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.703512 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.705161 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-config-data\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.705939 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.706505 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.706530 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b279f517412c9d421e4d384ad7a1032e9021db2370e77c854a0ec0125cf75d39/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.707692 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/342647d1-5339-47e5-b35c-80b4406a2ea6-server-conf\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.708777 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.709175 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: 
\"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.709693 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/342647d1-5339-47e5-b35c-80b4406a2ea6-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.710738 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/342647d1-5339-47e5-b35c-80b4406a2ea6-pod-info\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.726846 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67g4b\" (UniqueName: \"kubernetes.io/projected/342647d1-5339-47e5-b35c-80b4406a2ea6-kube-api-access-67g4b\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.759593 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84d96401-ecc6-4b20-91e2-fae52f90027f\") pod \"rabbitmq-server-1\" (UID: \"342647d1-5339-47e5-b35c-80b4406a2ea6\") " pod="openstack/rabbitmq-server-1" Feb 17 16:23:39 crc kubenswrapper[4829]: I0217 16:23:39.872753 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 17 16:23:40 crc kubenswrapper[4829]: I0217 16:23:40.301634 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328bcfe0-93b6-44bb-83ca-2b3a105f1548" path="/var/lib/kubelet/pods/328bcfe0-93b6-44bb-83ca-2b3a105f1548/volumes" Feb 17 16:23:40 crc kubenswrapper[4829]: I0217 16:23:40.389364 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.064232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"b2d0281e8cc1c30da8422e8269380efafdb42c42ab81ddf3b4f0cc192a279788"} Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.067434 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerID="f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c" exitCode=0 Feb 17 16:23:41 crc kubenswrapper[4829]: I0217 16:23:41.067497 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerDied","Data":"f475e165a7fd945db6dbd553e495416ac23eacbfc31b55c14ceba26b5cbdf69c"} Feb 17 16:23:42 crc kubenswrapper[4829]: I0217 16:23:42.848738 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.002731 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.002845 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.003051 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.003202 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") pod \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\" (UID: \"2b2909c1-2feb-4fa2-8a7e-e406334ade24\") " Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.008421 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd" (OuterVolumeSpecName: "kube-api-access-7p2rd") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "kube-api-access-7p2rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.011909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.042559 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.047864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory" (OuterVolumeSpecName: "inventory") pod "2b2909c1-2feb-4fa2-8a7e-e406334ade24" (UID: "2b2909c1-2feb-4fa2-8a7e-e406334ade24"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.090446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d"} Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094607 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" event={"ID":"2b2909c1-2feb-4fa2-8a7e-e406334ade24","Type":"ContainerDied","Data":"a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805"} Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094646 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b7c2b2bdbf4133863d60291b884d8a23a79aa90a5e85dfdc39eebab2ad9805" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.094705 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112616 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112690 4829 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112708 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2909c1-2feb-4fa2-8a7e-e406334ade24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.112720 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2rd\" (UniqueName: \"kubernetes.io/projected/2b2909c1-2feb-4fa2-8a7e-e406334ade24-kube-api-access-7p2rd\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.193864 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:43 crc kubenswrapper[4829]: E0217 16:23:43.196139 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.196187 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.196500 4829 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b2909c1-2feb-4fa2-8a7e-e406334ade24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.197385 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.210919 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249639 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249807 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.249952 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.250166 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.316940 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.317070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.317105 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419541 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419662 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.419992 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 
16:23:43.423981 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.429928 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.450544 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vzzfp\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:43 crc kubenswrapper[4829]: I0217 16:23:43.562776 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:44 crc kubenswrapper[4829]: I0217 16:23:44.185801 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp"] Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400489 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400548 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.400707 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:23:44 crc kubenswrapper[4829]: E0217 16:23:44.401909 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.118748 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerStarted","Data":"1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21"} Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.119094 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerStarted","Data":"de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40"} Feb 17 16:23:45 crc kubenswrapper[4829]: I0217 16:23:45.145373 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" podStartSLOduration=1.748254218 podStartE2EDuration="2.145351473s" podCreationTimestamp="2026-02-17 16:23:43 +0000 UTC" firstStartedPulling="2026-02-17 16:23:44.191146064 +0000 UTC m=+1736.608164062" lastFinishedPulling="2026-02-17 16:23:44.588243329 +0000 UTC m=+1737.005261317" observedRunningTime="2026-02-17 16:23:45.13828782 +0000 UTC m=+1737.555305798" watchObservedRunningTime="2026-02-17 16:23:45.145351473 +0000 UTC m=+1737.562369451" Feb 17 16:23:45 crc kubenswrapper[4829]: E0217 16:23:45.281087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:23:48 crc kubenswrapper[4829]: I0217 16:23:48.171450 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerID="1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21" exitCode=0 Feb 17 16:23:48 crc kubenswrapper[4829]: I0217 16:23:48.171502 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerDied","Data":"1f8f075b73821cef74d435f81da52789241f4966fd6d4cf03e9f7cb13539ff21"} Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.280055 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:23:49 crc kubenswrapper[4829]: E0217 16:23:49.281043 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.822737 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.991837 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.991961 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:49 crc kubenswrapper[4829]: I0217 16:23:49.992276 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") pod \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\" (UID: \"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e\") " Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.000063 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr" (OuterVolumeSpecName: "kube-api-access-kc2sr") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). InnerVolumeSpecName "kube-api-access-kc2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.026785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory" (OuterVolumeSpecName: "inventory") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.030214 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" (UID: "fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095259 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2sr\" (UniqueName: \"kubernetes.io/projected/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-kube-api-access-kc2sr\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095293 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.095303 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.201369 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" event={"ID":"fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e","Type":"ContainerDied","Data":"de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40"} Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.201409 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3969fe2f5e553ddd19a0d5a315095716b24b3b29a4d8ba018c29def2321a40" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 
16:23:50.201468 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vzzfp" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.319007 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:50 crc kubenswrapper[4829]: E0217 16:23:50.319819 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.319842 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.320280 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.321673 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.324446 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.324938 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.325297 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.326290 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.333917 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504385 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504666 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504757 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.504786 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.607878 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.607975 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.608020 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.608367 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.611686 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.612232 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.614244 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.640886 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:50 crc kubenswrapper[4829]: I0217 16:23:50.652090 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:23:51 crc kubenswrapper[4829]: I0217 16:23:51.268397 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj"] Feb 17 16:23:51 crc kubenswrapper[4829]: W0217 16:23:51.280823 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f00333b_9c18_4a8c_b409_2961da9afccc.slice/crio-78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a WatchSource:0}: Error finding container 78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a: Status 404 returned error can't find the container with id 78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a Feb 17 16:23:52 crc kubenswrapper[4829]: I0217 16:23:52.232455 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerStarted","Data":"78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a"} Feb 17 16:23:54 crc kubenswrapper[4829]: I0217 16:23:54.259156 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" 
event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerStarted","Data":"dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802"} Feb 17 16:23:54 crc kubenswrapper[4829]: I0217 16:23:54.286420 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" podStartSLOduration=2.562824317 podStartE2EDuration="4.286397819s" podCreationTimestamp="2026-02-17 16:23:50 +0000 UTC" firstStartedPulling="2026-02-17 16:23:51.282990546 +0000 UTC m=+1743.700008524" lastFinishedPulling="2026-02-17 16:23:53.006564038 +0000 UTC m=+1745.423582026" observedRunningTime="2026-02-17 16:23:54.279971595 +0000 UTC m=+1746.696989573" watchObservedRunningTime="2026-02-17 16:23:54.286397819 +0000 UTC m=+1746.703415797" Feb 17 16:23:57 crc kubenswrapper[4829]: E0217 16:23:57.282399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:00 crc kubenswrapper[4829]: I0217 16:24:00.280753 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.281399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415383 4829 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415465 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.415662 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:00 crc kubenswrapper[4829]: E0217 16:24:00.417268 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:10 crc kubenswrapper[4829]: E0217 16:24:10.281215 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:11 crc kubenswrapper[4829]: E0217 16:24:11.282060 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:13 crc kubenswrapper[4829]: I0217 16:24:13.280280 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:13 crc kubenswrapper[4829]: E0217 16:24:13.281171 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.215931 4829 scope.go:117] "RemoveContainer" containerID="7762e87703a1c4136eb3b4174777b162abed1e4bd8d781f944d890ff3fd5cd96" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.239567 4829 scope.go:117] "RemoveContainer" containerID="5b45e379b740973ba122e05427a01186c34a580e09566960544af4dd61aaf736" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 
16:24:15.311306 4829 scope.go:117] "RemoveContainer" containerID="ef4d8a2620e4f126f2f3b7d4b615a3f0007223efb883b8eb59462a1965f215c8" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.336863 4829 scope.go:117] "RemoveContainer" containerID="add6f99dd5aa2a876eb7d6f75408368d7dc1149a375b7055a94eb49141a47491" Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.509130 4829 generic.go:334] "Generic (PLEG): container finished" podID="342647d1-5339-47e5-b35c-80b4406a2ea6" containerID="36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d" exitCode=0 Feb 17 16:24:15 crc kubenswrapper[4829]: I0217 16:24:15.509182 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerDied","Data":"36b9687fdab11fb69f7021e53dbf3b14a5d11683bb0ede2af8d65e1ffaffaf6d"} Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.522659 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"342647d1-5339-47e5-b35c-80b4406a2ea6","Type":"ContainerStarted","Data":"abfe536e361127215a0200d70dc18ee7b043da3413cd9902d21e30e5460979b4"} Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.523367 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 17 16:24:16 crc kubenswrapper[4829]: I0217 16:24:16.558564 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.558546437 podStartE2EDuration="37.558546437s" podCreationTimestamp="2026-02-17 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:16.550282352 +0000 UTC m=+1768.967300330" watchObservedRunningTime="2026-02-17 16:24:16.558546437 +0000 UTC m=+1768.975564415" Feb 17 16:24:24 crc kubenswrapper[4829]: I0217 16:24:24.278955 4829 scope.go:117] "RemoveContainer" 
containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:24 crc kubenswrapper[4829]: E0217 16:24:24.279664 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:25 crc kubenswrapper[4829]: E0217 16:24:25.281829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:26 crc kubenswrapper[4829]: E0217 16:24:26.282610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:29 crc kubenswrapper[4829]: I0217 16:24:29.875863 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 17 16:24:29 crc kubenswrapper[4829]: I0217 16:24:29.939499 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:34 crc kubenswrapper[4829]: I0217 16:24:34.360941 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" 
containerID="cri-o://ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" gracePeriod=604796 Feb 17 16:24:35 crc kubenswrapper[4829]: I0217 16:24:35.203212 4829 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 17 16:24:36 crc kubenswrapper[4829]: I0217 16:24:36.280917 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:24:36 crc kubenswrapper[4829]: E0217 16:24:36.281840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:24:36 crc kubenswrapper[4829]: E0217 16:24:36.282852 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:24:41 crc kubenswrapper[4829]: E0217 16:24:41.299821 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.578468 4829 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764553 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764681 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.764811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.768928 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.768983 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769027 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769141 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769218 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769259 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769317 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.769342 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " 
Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770128 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770526 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.770658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771305 4829 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771327 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.771340 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.772030 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8" (OuterVolumeSpecName: "kube-api-access-lz7m8") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "kube-api-access-lz7m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.772180 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.781346 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info" (OuterVolumeSpecName: "pod-info") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.785414 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.798354 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9" (OuterVolumeSpecName: "persistence") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.822942 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data" (OuterVolumeSpecName: "config-data") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.857026 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf" (OuterVolumeSpecName: "server-conf") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865434 4829 generic.go:334] "Generic (PLEG): container finished" podID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" exitCode=0 Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865481 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865509 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee690a85-cf83-4e55-a69d-ca6bd136bf07","Type":"ContainerDied","Data":"a60aada70c3f593a74b4071c2abcb6f9c3fd33978cc728f03766c68f321305cc"} Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.865528 4829 scope.go:117] "RemoveContainer" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.867459 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874637 4829 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee690a85-cf83-4e55-a69d-ca6bd136bf07-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874691 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874701 4829 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee690a85-cf83-4e55-a69d-ca6bd136bf07-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874710 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874725 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7m8\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-kube-api-access-lz7m8\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874738 4829 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee690a85-cf83-4e55-a69d-ca6bd136bf07-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.874780 4829 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") on node \"crc\" " Feb 17 16:24:41 
crc kubenswrapper[4829]: I0217 16:24:41.920883 4829 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.921210 4829 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9") on node "crc" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.975772 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.976432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") pod \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\" (UID: \"ee690a85-cf83-4e55-a69d-ca6bd136bf07\") " Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.977657 4829 reconciler_common.go:293] "Volume detached for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:41 crc kubenswrapper[4829]: W0217 16:24:41.977800 4829 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ee690a85-cf83-4e55-a69d-ca6bd136bf07/volumes/kubernetes.io~projected/rabbitmq-confd Feb 17 16:24:41 crc kubenswrapper[4829]: I0217 16:24:41.977880 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ee690a85-cf83-4e55-a69d-ca6bd136bf07" (UID: "ee690a85-cf83-4e55-a69d-ca6bd136bf07"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.079356 4829 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee690a85-cf83-4e55-a69d-ca6bd136bf07-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.085735 4829 scope.go:117] "RemoveContainer" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.116233 4829 scope.go:117] "RemoveContainer" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.117236 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": container with ID starting with ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319 not found: ID does not exist" containerID="ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117269 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319"} err="failed to get container status \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": rpc error: code = NotFound desc = could not find container \"ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319\": container with ID starting with ffe5d3f103305b16d8ed85e37f44da078b58d0cc00dc8625d299161a0bfc6319 not found: ID does not exist" Feb 17 
16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117293 4829 scope.go:117] "RemoveContainer" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.117628 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": container with ID starting with 86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a not found: ID does not exist" containerID="86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.117769 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a"} err="failed to get container status \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": rpc error: code = NotFound desc = could not find container \"86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a\": container with ID starting with 86e75ef2ac528560ffb3920829feb44d8527363e68b90ba8dcb2df132fdfd85a not found: ID does not exist" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.224591 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.239687 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.326976 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" path="/var/lib/kubelet/pods/ee690a85-cf83-4e55-a69d-ca6bd136bf07/volumes" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.327638 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.328003 
4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="setup-container" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328014 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="setup-container" Feb 17 16:24:42 crc kubenswrapper[4829]: E0217 16:24:42.328046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328052 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.328252 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee690a85-cf83-4e55-a69d-ca6bd136bf07" containerName="rabbitmq" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.329623 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.329699 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.501870 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.501977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502246 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502301 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 
crc kubenswrapper[4829]: I0217 16:24:42.502338 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502595 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502700 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.502849 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0" Feb 17 16:24:42 crc 
kubenswrapper[4829]: I0217 16:24:42.502883 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605352 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605460 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605568 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605732 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605769 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.605840 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606003 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606024 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606307 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606333 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606361 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.606308 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.607039 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-config-data\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.607177 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.607591 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/feaa3649-f3db-44ac-8054-cd13296c0845-server-conf\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.609283 4829 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.609335 4829 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f2fb41440360b87637c863c905d7642fdbb5fac4b43922d0db49761300e3e982/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611055 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/feaa3649-f3db-44ac-8054-cd13296c0845-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611167 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.611726 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/feaa3649-f3db-44ac-8054-cd13296c0845-pod-info\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.613096 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.636048 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dvw7\" (UniqueName: \"kubernetes.io/projected/feaa3649-f3db-44ac-8054-cd13296c0845-kube-api-access-4dvw7\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.713406 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a07e4b32-476b-47fe-b1c5-4bd7b109bad9\") pod \"rabbitmq-server-0\" (UID: \"feaa3649-f3db-44ac-8054-cd13296c0845\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:24:42 crc kubenswrapper[4829]: I0217 16:24:42.955891 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 16:24:43 crc kubenswrapper[4829]: I0217 16:24:43.521203 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 16:24:43 crc kubenswrapper[4829]: I0217 16:24:43.891171 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"8cbb4822f62f78253042dcb81e07985af5147d86b60f491f906f8010915fbb28"}
Feb 17 16:24:46 crc kubenswrapper[4829]: I0217 16:24:46.938249 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0"}
Feb 17 16:24:47 crc kubenswrapper[4829]: I0217 16:24:47.279732 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:24:47 crc kubenswrapper[4829]: E0217 16:24:47.280388 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:24:50 crc kubenswrapper[4829]: E0217 16:24:50.281982 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:24:56 crc kubenswrapper[4829]: E0217 16:24:56.282644 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:25:02 crc kubenswrapper[4829]: I0217 16:25:02.280391 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:25:02 crc kubenswrapper[4829]: E0217 16:25:02.281471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:25:03 crc kubenswrapper[4829]: E0217 16:25:03.282373 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:25:10 crc kubenswrapper[4829]: E0217 16:25:10.284927 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:25:17 crc kubenswrapper[4829]: I0217 16:25:17.280166 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:25:17 crc kubenswrapper[4829]: E0217 16:25:17.283264 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425130 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425406 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.425512 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:25:18 crc kubenswrapper[4829]: E0217 16:25:18.426670 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:25:19 crc kubenswrapper[4829]: I0217 16:25:19.430036 4829 generic.go:334] "Generic (PLEG): container finished" podID="feaa3649-f3db-44ac-8054-cd13296c0845" containerID="e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0" exitCode=0
Feb 17 16:25:19 crc kubenswrapper[4829]: I0217 16:25:19.430193 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerDied","Data":"e9839933075dec79e891b6caec6bd93a6665e93e943c11063a9778f18acd6bb0"}
Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.445471 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"feaa3649-f3db-44ac-8054-cd13296c0845","Type":"ContainerStarted","Data":"3ad375d29c751ca67e9ead9056f161b8c22463b18f6e4a157e0f7a0a8768addb"}
Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.446082 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 17 16:25:20 crc kubenswrapper[4829]: I0217 16:25:20.478516 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.478494203 podStartE2EDuration="38.478494203s" podCreationTimestamp="2026-02-17 16:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:20.471992836 +0000 UTC m=+1832.889010854" watchObservedRunningTime="2026-02-17 16:25:20.478494203 +0000 UTC m=+1832.895512181"
Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.408525 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.409081 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.409286 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:25:24 crc kubenswrapper[4829]: E0217 16:25:24.410554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.506530 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"]
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.511349 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.520676 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"]
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.598753 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.599079 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.599187 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701398 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701471 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.701560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.702251 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.702288 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.724409 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"certified-operators-pvqbf\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") " pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:29 crc kubenswrapper[4829]: I0217 16:25:29.849478 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:30 crc kubenswrapper[4829]: E0217 16:25:30.280983 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:25:30 crc kubenswrapper[4829]: I0217 16:25:30.353914 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"]
Feb 17 16:25:30 crc kubenswrapper[4829]: I0217 16:25:30.570240 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"787ba6ce4d84d1c2d3fed84fb2ed9b68fbb7b8f0c893e7970515e42d02dec566"}
Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.306281 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab"
Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.585643 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" exitCode=0
Feb 17 16:25:31 crc kubenswrapper[4829]: I0217 16:25:31.585682 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27"}
Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.597548 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"}
Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.600363 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"}
Feb 17 16:25:32 crc kubenswrapper[4829]: I0217 16:25:32.960163 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 17 16:25:34 crc kubenswrapper[4829]: I0217 16:25:34.626351 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" exitCode=0
Feb 17 16:25:34 crc kubenswrapper[4829]: I0217 16:25:34.626491 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"}
Feb 17 16:25:35 crc kubenswrapper[4829]: I0217 16:25:35.643972 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerStarted","Data":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"}
Feb 17 16:25:35 crc kubenswrapper[4829]: I0217 16:25:35.672058 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvqbf" podStartSLOduration=3.1027734750000002 podStartE2EDuration="6.672036977s" podCreationTimestamp="2026-02-17 16:25:29 +0000 UTC" firstStartedPulling="2026-02-17 16:25:31.588012975 +0000 UTC m=+1844.005030953" lastFinishedPulling="2026-02-17 16:25:35.157276477 +0000 UTC m=+1847.574294455" observedRunningTime="2026-02-17 16:25:35.661518681 +0000 UTC m=+1848.078536679" watchObservedRunningTime="2026-02-17 16:25:35.672036977 +0000 UTC m=+1848.089054965"
Feb 17 16:25:38 crc kubenswrapper[4829]: E0217 16:25:38.296318 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.849755 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.850101 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:39 crc kubenswrapper[4829]: I0217 16:25:39.912554 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:40 crc kubenswrapper[4829]: I0217 16:25:40.790462 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:40 crc kubenswrapper[4829]: I0217 16:25:40.848522 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"]
Feb 17 16:25:42 crc kubenswrapper[4829]: I0217 16:25:42.741136 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvqbf" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" containerID="cri-o://0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" gracePeriod=2
Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.280727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.354514 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf"
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.491854 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") "
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.492132 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") "
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.492164 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") pod \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\" (UID: \"f33a93a0-671d-4454-a62b-9d8f6e0b9f73\") "
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.493062 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities" (OuterVolumeSpecName: "utilities") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.499217 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p" (OuterVolumeSpecName: "kube-api-access-dqk7p") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "kube-api-access-dqk7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.542357 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f33a93a0-671d-4454-a62b-9d8f6e0b9f73" (UID: "f33a93a0-671d-4454-a62b-9d8f6e0b9f73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595172 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595198 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.595208 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqk7p\" (UniqueName: \"kubernetes.io/projected/f33a93a0-671d-4454-a62b-9d8f6e0b9f73-kube-api-access-dqk7p\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758209 4829 generic.go:334] "Generic (PLEG): container finished" podID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" exitCode=0
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758278 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"}
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758319 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvqbf" event={"ID":"f33a93a0-671d-4454-a62b-9d8f6e0b9f73","Type":"ContainerDied","Data":"787ba6ce4d84d1c2d3fed84fb2ed9b68fbb7b8f0c893e7970515e42d02dec566"}
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.758348 4829 scope.go:117] "RemoveContainer" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"
Feb 17 16:25:43 crc kubenswrapper[4829]: I0217
16:25:43.760739 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvqbf" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.809438 4829 scope.go:117] "RemoveContainer" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.822636 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.837912 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvqbf"] Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.845528 4829 scope.go:117] "RemoveContainer" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.908161 4829 scope.go:117] "RemoveContainer" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.908996 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": container with ID starting with 0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4 not found: ID does not exist" containerID="0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909040 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4"} err="failed to get container status \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": rpc error: code = NotFound desc = could not find container \"0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4\": container with ID starting with 
0876d49477ff13e852d52539db8dd2f14ac791962a25e3a89e19d13411884ad4 not found: ID does not exist" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909088 4829 scope.go:117] "RemoveContainer" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.909462 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": container with ID starting with 9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307 not found: ID does not exist" containerID="9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909508 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307"} err="failed to get container status \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": rpc error: code = NotFound desc = could not find container \"9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307\": container with ID starting with 9aa7e3bc65b1ee1502dd3e2daaf0f5259eabc8fc1d82bb40b76e2678e58f3307 not found: ID does not exist" Feb 17 16:25:43 crc kubenswrapper[4829]: I0217 16:25:43.909540 4829 scope.go:117] "RemoveContainer" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc kubenswrapper[4829]: E0217 16:25:43.909980 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": container with ID starting with c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27 not found: ID does not exist" containerID="c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27" Feb 17 16:25:43 crc 
kubenswrapper[4829]: I0217 16:25:43.910022 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27"} err="failed to get container status \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": rpc error: code = NotFound desc = could not find container \"c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27\": container with ID starting with c67aaaeccf0a5d70023c7c89744b00785845bfbb83bbe505264af5416482bf27 not found: ID does not exist" Feb 17 16:25:44 crc kubenswrapper[4829]: I0217 16:25:44.295254 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" path="/var/lib/kubelet/pods/f33a93a0-671d-4454-a62b-9d8f6e0b9f73/volumes" Feb 17 16:25:52 crc kubenswrapper[4829]: E0217 16:25:52.281810 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:25:58 crc kubenswrapper[4829]: E0217 16:25:58.288770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:07 crc kubenswrapper[4829]: E0217 16:26:07.283156 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:11 crc kubenswrapper[4829]: E0217 16:26:11.282314 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:15 crc kubenswrapper[4829]: I0217 16:26:15.542879 4829 scope.go:117] "RemoveContainer" containerID="916147e2370ae60f186efa9e80afd991d753bbf564e29b51b6534b8ab40c0404" Feb 17 16:26:15 crc kubenswrapper[4829]: I0217 16:26:15.574692 4829 scope.go:117] "RemoveContainer" containerID="09ad5b10424e8b5b328f0a86728cd3939f7463a5f50a783ad37495c769ed00ec" Feb 17 16:26:20 crc kubenswrapper[4829]: E0217 16:26:20.282944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:25 crc kubenswrapper[4829]: E0217 16:26:25.283513 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.067347 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.078321 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 
16:26:33.087762 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8f32-account-create-update-gv4hc"] Feb 17 16:26:33 crc kubenswrapper[4829]: I0217 16:26:33.105750 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l4jl2"] Feb 17 16:26:33 crc kubenswrapper[4829]: E0217 16:26:33.282988 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:34 crc kubenswrapper[4829]: I0217 16:26:34.295335 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c18e73-013c-4a4d-a4cc-922f43fccf45" path="/var/lib/kubelet/pods/91c18e73-013c-4a4d-a4cc-922f43fccf45/volumes" Feb 17 16:26:34 crc kubenswrapper[4829]: I0217 16:26:34.297031 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa06d20-74dd-41b6-822b-485fdf6cc6d5" path="/var/lib/kubelet/pods/aaa06d20-74dd-41b6-822b-485fdf6cc6d5/volumes" Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.034557 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.051260 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.064797 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ltmz7"] Feb 17 16:26:35 crc kubenswrapper[4829]: I0217 16:26:35.074667 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vnwrj"] Feb 17 16:26:36 crc kubenswrapper[4829]: I0217 16:26:36.293167 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d" path="/var/lib/kubelet/pods/3b0ce9ad-f2d0-4d3c-abab-0cda2df6b41d/volumes" Feb 17 16:26:36 crc kubenswrapper[4829]: I0217 16:26:36.294296 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef" path="/var/lib/kubelet/pods/9bd8ae3f-8cc5-4d55-87d6-6cf9f8dbfaef/volumes" Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.032789 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.045218 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f99f-account-create-update-7rvdj"] Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.449419 4829 generic.go:334] "Generic (PLEG): container finished" podID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerID="dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802" exitCode=0 Feb 17 16:26:37 crc kubenswrapper[4829]: I0217 16:26:37.449479 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerDied","Data":"dba4246e4627de322b6cbadf9f10ef3d802b3cfeed33a3fdac4043cbd4f79802"} Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.033716 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.045698 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c7bc-account-create-update-zd552"] Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.294242 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406819b6-b859-4d4d-93ee-43180f5981bf" path="/var/lib/kubelet/pods/406819b6-b859-4d4d-93ee-43180f5981bf/volumes" Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.295434 4829 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea266eaa-6bce-499f-9891-ca9ec670e465" path="/var/lib/kubelet/pods/ea266eaa-6bce-499f-9891-ca9ec670e465/volumes" Feb 17 16:26:38 crc kubenswrapper[4829]: I0217 16:26:38.911321 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.088811 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.088932 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.089032 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.089137 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") pod \"9f00333b-9c18-4a8c-b409-2961da9afccc\" (UID: \"9f00333b-9c18-4a8c-b409-2961da9afccc\") " Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.095004 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j" (OuterVolumeSpecName: "kube-api-access-8hf5j") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "kube-api-access-8hf5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.098300 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.121986 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.127895 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory" (OuterVolumeSpecName: "inventory") pod "9f00333b-9c18-4a8c-b409-2961da9afccc" (UID: "9f00333b-9c18-4a8c-b409-2961da9afccc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.192957 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hf5j\" (UniqueName: \"kubernetes.io/projected/9f00333b-9c18-4a8c-b409-2961da9afccc-kube-api-access-8hf5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193282 4829 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193292 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.193303 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f00333b-9c18-4a8c-b409-2961da9afccc-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.281384 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478283 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" event={"ID":"9f00333b-9c18-4a8c-b409-2961da9afccc","Type":"ContainerDied","Data":"78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a"} Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478332 4829 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e4f8ed007bcea44428c7be3a24e00c50f7b3ed38273b7dccedfd238162547a" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.478395 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576160 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"] Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.576899 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576935 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.576959 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-utilities" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.576969 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-utilities" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.577014 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577023 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" Feb 17 16:26:39 crc kubenswrapper[4829]: E0217 16:26:39.577044 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" 
containerName="extract-content" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="extract-content" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577372 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f00333b-9c18-4a8c-b409-2961da9afccc" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.577395 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33a93a0-671d-4454-a62b-9d8f6e0b9f73" containerName="registry-server" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.578723 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582163 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582278 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.582489 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.583050 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.598116 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"] Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603371 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603495 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.603653 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706263 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706368 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.706819 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.711057 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.720718 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc kubenswrapper[4829]: I0217 16:26:39.730090 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:39 crc 
kubenswrapper[4829]: I0217 16:26:39.902846 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.054660 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"] Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.073104 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-tdv6p"] Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.294028 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03006c3-35b5-45e5-9b9f-578a8eabbf22" path="/var/lib/kubelet/pods/e03006c3-35b5-45e5-9b9f-578a8eabbf22/volumes" Feb 17 16:26:40 crc kubenswrapper[4829]: I0217 16:26:40.491721 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q"] Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.046620 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"] Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.062519 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-bf88-account-create-update-tfddd"] Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.505903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerStarted","Data":"e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf"} Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.505953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" 
event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerStarted","Data":"4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313"} Feb 17 16:26:41 crc kubenswrapper[4829]: I0217 16:26:41.537866 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" podStartSLOduration=2.119700583 podStartE2EDuration="2.537844065s" podCreationTimestamp="2026-02-17 16:26:39 +0000 UTC" firstStartedPulling="2026-02-17 16:26:40.500720819 +0000 UTC m=+1912.917738797" lastFinishedPulling="2026-02-17 16:26:40.918864301 +0000 UTC m=+1913.335882279" observedRunningTime="2026-02-17 16:26:41.525749507 +0000 UTC m=+1913.942767495" watchObservedRunningTime="2026-02-17 16:26:41.537844065 +0000 UTC m=+1913.954862053" Feb 17 16:26:42 crc kubenswrapper[4829]: I0217 16:26:42.296917 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50b4954-d1c6-451e-b8f4-3ba817c89c6b" path="/var/lib/kubelet/pods/e50b4954-d1c6-451e-b8f4-3ba817c89c6b/volumes" Feb 17 16:26:44 crc kubenswrapper[4829]: E0217 16:26:44.286521 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:50 crc kubenswrapper[4829]: E0217 16:26:50.283205 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.037682 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.048151 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.058282 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-5498-account-create-update-qsrnr"] Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.070369 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-qg7tn"] Feb 17 16:26:58 crc kubenswrapper[4829]: E0217 16:26:58.289819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.302599 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c492d16-f301-449b-a877-a15a17739865" path="/var/lib/kubelet/pods/5c492d16-f301-449b-a877-a15a17739865/volumes" Feb 17 16:26:58 crc kubenswrapper[4829]: I0217 16:26:58.303901 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e81e7f-9610-493c-bdb8-6a7de58b94bf" path="/var/lib/kubelet/pods/f2e81e7f-9610-493c-bdb8-6a7de58b94bf/volumes" Feb 17 16:27:01 crc kubenswrapper[4829]: I0217 16:27:01.050696 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btrfb"] Feb 17 16:27:01 crc kubenswrapper[4829]: I0217 16:27:01.064550 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btrfb"] Feb 17 16:27:02 crc kubenswrapper[4829]: E0217 16:27:02.283716 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:27:02 crc kubenswrapper[4829]: I0217 16:27:02.303530 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df678697-9139-4571-9d3b-9c51ec34df7c" path="/var/lib/kubelet/pods/df678697-9139-4571-9d3b-9c51ec34df7c/volumes" Feb 17 16:27:08 crc kubenswrapper[4829]: I0217 16:27:08.515871 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6bd8598c46-74wvs" podUID="90b368e2-73a9-4594-8428-e17a7bb1e499" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:27:09 crc kubenswrapper[4829]: I0217 16:27:09.038476 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:27:09 crc kubenswrapper[4829]: I0217 16:27:09.048227 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9z4lf"] Feb 17 16:27:10 crc kubenswrapper[4829]: E0217 16:27:10.284760 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:27:10 crc kubenswrapper[4829]: I0217 16:27:10.298965 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14bea24-3170-4bdb-8811-9a94d94ae4b7" path="/var/lib/kubelet/pods/e14bea24-3170-4bdb-8811-9a94d94ae4b7/volumes" Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.062122 4829 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.072394 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sgsbf"] Feb 17 16:27:12 crc kubenswrapper[4829]: I0217 16:27:12.292165 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043875d4-c1c8-4363-95ca-a7ad4a1d7ae4" path="/var/lib/kubelet/pods/043875d4-c1c8-4363-95ca-a7ad4a1d7ae4/volumes" Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.047717 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.062289 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.079398 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tfzp7"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.099318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wlnfn"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.110310 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.119406 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.128167 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.136816 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2cec-account-create-update-hfc78"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.145566 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0525-account-create-update-t6qsf"] Feb 17 16:27:13 crc 
kubenswrapper[4829]: I0217 16:27:13.154008 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d7b6-account-create-update-n4xbx"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.162764 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.171402 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-0c9f-account-create-update-htzx9"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.181727 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:27:13 crc kubenswrapper[4829]: I0217 16:27:13.190948 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-gvpcv"] Feb 17 16:27:13 crc kubenswrapper[4829]: E0217 16:27:13.282375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.301280 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45907bce-01ca-47e8-bfef-12ae037bb254" path="/var/lib/kubelet/pods/45907bce-01ca-47e8-bfef-12ae037bb254/volumes" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.302934 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb73f59-cddf-4630-b754-264ec2ccee1e" path="/var/lib/kubelet/pods/5fb73f59-cddf-4630-b754-264ec2ccee1e/volumes" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.304236 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64394b7b-175f-4429-b284-783394b5362b" path="/var/lib/kubelet/pods/64394b7b-175f-4429-b284-783394b5362b/volumes" Feb 17 16:27:14 crc 
kubenswrapper[4829]: I0217 16:27:14.305443 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ad18d3-95f7-43e4-b906-65466cf9b14f" path="/var/lib/kubelet/pods/84ad18d3-95f7-43e4-b906-65466cf9b14f/volumes" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.307708 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="964c7b6b-c551-489a-9a5b-7fbe31c855b2" path="/var/lib/kubelet/pods/964c7b6b-c551-489a-9a5b-7fbe31c855b2/volumes" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.309881 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1857247-1b55-4f04-91b5-2725347ddd5e" path="/var/lib/kubelet/pods/a1857247-1b55-4f04-91b5-2725347ddd5e/volumes" Feb 17 16:27:14 crc kubenswrapper[4829]: I0217 16:27:14.310696 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7208dff-6f9e-410a-9b88-e6def8b38478" path="/var/lib/kubelet/pods/f7208dff-6f9e-410a-9b88-e6def8b38478/volumes" Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.697247 4829 scope.go:117] "RemoveContainer" containerID="20b680a069f205c7254600a2dc48f2dacbee35886c3daf160c27ebefa332adfa" Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.859927 4829 scope.go:117] "RemoveContainer" containerID="0bcb4f250e213804507ed493214ba7bf617f7f2f71800c17fbdff667468ccdaa" Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.884556 4829 scope.go:117] "RemoveContainer" containerID="a8d5e938c03955318069a91689bb204bf27fd21a056ffa247054c274b646d733" Feb 17 16:27:15 crc kubenswrapper[4829]: I0217 16:27:15.953980 4829 scope.go:117] "RemoveContainer" containerID="4ba65477b876815a4af6a839fd23fbb043f8161fda6b1b9302f717d3bb40593d" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.003533 4829 scope.go:117] "RemoveContainer" containerID="42892c9ff9e32a928e6e83b4efcbb8f60153f54eaa6ceb08fd7677183a549354" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.072560 4829 scope.go:117] "RemoveContainer" 
containerID="2db5e51be688f04135c16e3c3049c787d4188d6cca9615ea116295016f098a49" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.113148 4829 scope.go:117] "RemoveContainer" containerID="50816bbb33b5760c561f5a9b97cac3b08bc50b9fb27103dbccc5b35ba91f0d4d" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.166175 4829 scope.go:117] "RemoveContainer" containerID="17c8100257ab6b556a498c4d304d5d6a56b063a8426f2656c39153f279b0d376" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.192456 4829 scope.go:117] "RemoveContainer" containerID="97c3d2066942ae5c865fce9d2f6158019f5e32e98988925aa95f76d7c042502f" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.223693 4829 scope.go:117] "RemoveContainer" containerID="e2e2b01d50a28aea9a4bdad84d2df7114b9e2d0c992f03355a3a939f0f4f0a79" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.250974 4829 scope.go:117] "RemoveContainer" containerID="2038fa35b09b9bbb81ec5afb753cf5b4293c16655d2ce98f8b33bdf9fc5ce5f0" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.283412 4829 scope.go:117] "RemoveContainer" containerID="459372b3f348ab7761a62b42e441f7a1ba76d111957340bf1dd535ab70f99945" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.328902 4829 scope.go:117] "RemoveContainer" containerID="717b27e5148f6eca4fe5434026e28771bb05f6785cb6ac5ed8c38cae82f30794" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.358868 4829 scope.go:117] "RemoveContainer" containerID="78179064b35e621b70da85e2f996d1c7f6636f395c1f7c08c6cda280cdbb8859" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.384404 4829 scope.go:117] "RemoveContainer" containerID="61a08cff2799109fdb7564a62bae4bd95492daf6611205fb5161091b218cd366" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.406796 4829 scope.go:117] "RemoveContainer" containerID="718ef8fa4b8c68244f19858a3acee9a29306f7958d3d08c1a8fe252589c457d1" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.430706 4829 scope.go:117] "RemoveContainer" 
containerID="1fe924cb8c093940e73402f84ac57352d9b776e550a42b2ef428c0a0f172493f" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.451969 4829 scope.go:117] "RemoveContainer" containerID="50a2604e4d6a7b2b1f806638f635ccd419fb9c70a1a17c0c06d4d5ba8ee01b26" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.470707 4829 scope.go:117] "RemoveContainer" containerID="17ab28ac0a5478f4563437c84c9df18e102e0c18d1f959410f323210c8c6af28" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.491788 4829 scope.go:117] "RemoveContainer" containerID="414323f952f1105e1e74c01059eb3f452e41a714ed9d19fd07bb964fdccb5204" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.510799 4829 scope.go:117] "RemoveContainer" containerID="e3fb41ef07db1f8e839c100410b2932c9041d772dbc365e213f544f3ecd58024" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.532701 4829 scope.go:117] "RemoveContainer" containerID="6d27c7207f6b3c9339d15c106190c1638d48becd22f0af8b39c3bb3b5418259d" Feb 17 16:27:16 crc kubenswrapper[4829]: I0217 16:27:16.551402 4829 scope.go:117] "RemoveContainer" containerID="e1df0e9635d5b24c64905f9caa82b8aa4d7b94aeead334b1bf450f67b01ebc0c" Feb 17 16:27:21 crc kubenswrapper[4829]: E0217 16:27:21.283080 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:27:28 crc kubenswrapper[4829]: E0217 16:27:28.291891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:27:29 crc kubenswrapper[4829]: 
I0217 16:27:29.040730 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:27:29 crc kubenswrapper[4829]: I0217 16:27:29.052529 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cs5v7"] Feb 17 16:27:30 crc kubenswrapper[4829]: I0217 16:27:30.293679 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd83d7c-5347-49c7-a979-d63e812d294c" path="/var/lib/kubelet/pods/3fd83d7c-5347-49c7-a979-d63e812d294c/volumes" Feb 17 16:27:36 crc kubenswrapper[4829]: E0217 16:27:36.287302 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:27:39 crc kubenswrapper[4829]: E0217 16:27:39.280682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:27:49 crc kubenswrapper[4829]: E0217 16:27:49.282979 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:27:51 crc kubenswrapper[4829]: E0217 16:27:51.282441 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:27:52 crc kubenswrapper[4829]: I0217 16:27:52.424535 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:52 crc kubenswrapper[4829]: I0217 16:27:52.424903 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.418407 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.419204 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.419530 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:02 crc kubenswrapper[4829]: E0217 16:28:02.421610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:04 crc kubenswrapper[4829]: E0217 16:28:04.284778 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:13 crc kubenswrapper[4829]: I0217 16:28:13.047875 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:28:13 crc kubenswrapper[4829]: I0217 16:28:13.058844 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jrh5n"] Feb 17 16:28:13 crc kubenswrapper[4829]: E0217 16:28:13.282704 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.040298 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.053175 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8s649"] Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.066185 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.077349 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tpsml"] Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.294088 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8ff4740d-5b36-4273-be02-50bec771e157" path="/var/lib/kubelet/pods/8ff4740d-5b36-4273-be02-50bec771e157/volumes" Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.294865 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acebba68-0142-4d4e-be34-e31a6ccb8722" path="/var/lib/kubelet/pods/acebba68-0142-4d4e-be34-e31a6ccb8722/volumes" Feb 17 16:28:14 crc kubenswrapper[4829]: I0217 16:28:14.295597 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8202be9-bbed-45eb-80af-de3018eb6ce2" path="/var/lib/kubelet/pods/f8202be9-bbed-45eb-80af-de3018eb6ce2/volumes" Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.415495 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.415915 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.416048 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:15 crc kubenswrapper[4829]: E0217 16:28:15.417345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.017608 4829 scope.go:117] "RemoveContainer" containerID="0cead0a3673c2aefb220fc0cc37916427fe9ba7b2f3f6514935233caf777c237" Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.057875 4829 scope.go:117] "RemoveContainer" containerID="0abca13517080b826127382c61dcfd8ef64b2ed21a762bebb1b7b97d2e2f51e2" Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.108655 4829 scope.go:117] "RemoveContainer" containerID="1a9eb4c01a9b5e23509c667ea792cf2ec4eabf591fe87b248ce8b1bd176e7115" Feb 17 16:28:17 crc kubenswrapper[4829]: I0217 16:28:17.152961 4829 scope.go:117] "RemoveContainer" containerID="3335350dd5e48d31f13599da8da9b10d7cf6e7d9242917e0fccf8b3a5f429fd6" Feb 17 16:28:22 crc kubenswrapper[4829]: I0217 16:28:22.424924 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:22 crc kubenswrapper[4829]: I0217 16:28:22.425684 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.045182 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.068175 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xh926"] Feb 17 16:28:24 crc kubenswrapper[4829]: I0217 16:28:24.306171 4829 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e" path="/var/lib/kubelet/pods/7972c4f2-e3c0-4677-9dea-b65c5ff8cc2e/volumes" Feb 17 16:28:28 crc kubenswrapper[4829]: E0217 16:28:28.293425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.054791 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.067318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n46p8"] Feb 17 16:28:30 crc kubenswrapper[4829]: E0217 16:28:30.282956 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:30 crc kubenswrapper[4829]: I0217 16:28:30.295138 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d9b56f-3f6b-4fb6-af65-8f2410f60e20" path="/var/lib/kubelet/pods/f3d9b56f-3f6b-4fb6-af65-8f2410f60e20/volumes" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.673234 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.680996 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.688165 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.761649 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.761925 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.762088 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863707 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863767 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.863872 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.864359 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.864398 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:37 crc kubenswrapper[4829]: I0217 16:28:37.885723 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"redhat-marketplace-c9vfs\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:38 crc kubenswrapper[4829]: I0217 16:28:38.019918 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:38 crc kubenswrapper[4829]: I0217 16:28:38.603328 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.044980 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" exitCode=0 Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.045054 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c"} Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.045277 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"5985bccf682a6daeb0c3e4594a3b5375cfeaccfafb2b267d869bbdd615d32ed6"} Feb 17 16:28:39 crc kubenswrapper[4829]: I0217 16:28:39.048402 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:28:40 crc kubenswrapper[4829]: E0217 16:28:40.281787 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:41 crc kubenswrapper[4829]: I0217 16:28:41.066993 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" 
event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} Feb 17 16:28:42 crc kubenswrapper[4829]: I0217 16:28:42.077192 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" exitCode=0 Feb 17 16:28:42 crc kubenswrapper[4829]: I0217 16:28:42.077297 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} Feb 17 16:28:43 crc kubenswrapper[4829]: I0217 16:28:43.091411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerStarted","Data":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} Feb 17 16:28:43 crc kubenswrapper[4829]: I0217 16:28:43.116823 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c9vfs" podStartSLOduration=2.577167532 podStartE2EDuration="6.116805144s" podCreationTimestamp="2026-02-17 16:28:37 +0000 UTC" firstStartedPulling="2026-02-17 16:28:39.048195151 +0000 UTC m=+2031.465213129" lastFinishedPulling="2026-02-17 16:28:42.587832753 +0000 UTC m=+2035.004850741" observedRunningTime="2026-02-17 16:28:43.10738498 +0000 UTC m=+2035.524402968" watchObservedRunningTime="2026-02-17 16:28:43.116805144 +0000 UTC m=+2035.533823132" Feb 17 16:28:43 crc kubenswrapper[4829]: E0217 16:28:43.280312 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:48 crc kubenswrapper[4829]: I0217 16:28:48.020434 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:48 crc kubenswrapper[4829]: I0217 16:28:48.020974 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:49 crc kubenswrapper[4829]: I0217 16:28:49.102410 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c9vfs" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:49 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:49 crc kubenswrapper[4829]: > Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425144 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425649 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.425698 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.426760 4829 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:28:52 crc kubenswrapper[4829]: I0217 16:28:52.426838 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" gracePeriod=600 Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.214735 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" exitCode=0 Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.214776 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c"} Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.215343 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} Feb 17 16:28:53 crc kubenswrapper[4829]: I0217 16:28:53.215363 4829 scope.go:117] "RemoveContainer" containerID="e8dda8a767184206339feba88d195523a1818749936a5034223426abebfeeaab" Feb 17 16:28:55 crc kubenswrapper[4829]: E0217 16:28:55.283330 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.082151 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.156741 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:28:58 crc kubenswrapper[4829]: E0217 16:28:58.299991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:28:58 crc kubenswrapper[4829]: I0217 16:28:58.329343 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:28:59 crc kubenswrapper[4829]: I0217 16:28:59.325158 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c9vfs" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" containerID="cri-o://25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" gracePeriod=2 Feb 17 16:28:59 crc kubenswrapper[4829]: I0217 16:28:59.949540 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.105885 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106082 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106157 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") pod \"62a49506-a612-4019-b32c-9e14503fda42\" (UID: \"62a49506-a612-4019-b32c-9e14503fda42\") " Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.106949 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities" (OuterVolumeSpecName: "utilities") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.108935 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.112544 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv" (OuterVolumeSpecName: "kube-api-access-zl5dv") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "kube-api-access-zl5dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.152034 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62a49506-a612-4019-b32c-9e14503fda42" (UID: "62a49506-a612-4019-b32c-9e14503fda42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.211861 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a49506-a612-4019-b32c-9e14503fda42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.211930 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5dv\" (UniqueName: \"kubernetes.io/projected/62a49506-a612-4019-b32c-9e14503fda42-kube-api-access-zl5dv\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344050 4829 generic.go:334] "Generic (PLEG): container finished" podID="62a49506-a612-4019-b32c-9e14503fda42" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" exitCode=0 Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c9vfs" event={"ID":"62a49506-a612-4019-b32c-9e14503fda42","Type":"ContainerDied","Data":"5985bccf682a6daeb0c3e4594a3b5375cfeaccfafb2b267d869bbdd615d32ed6"} Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344177 4829 scope.go:117] "RemoveContainer" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.344502 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c9vfs" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.377183 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.383859 4829 scope.go:117] "RemoveContainer" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.393097 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c9vfs"] Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.410998 4829 scope.go:117] "RemoveContainer" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.461930 4829 scope.go:117] "RemoveContainer" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.462562 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": container with ID starting with 25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f not found: ID does not exist" containerID="25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.462617 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f"} err="failed to get container status \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": rpc error: code = NotFound desc = could not find container \"25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f\": container with ID starting with 25ce2f4d1610c58c4e5b238c646bd64b6653fa68c9784be373a952ee249b226f not found: 
ID does not exist" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.462645 4829 scope.go:117] "RemoveContainer" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.462988 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": container with ID starting with d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f not found: ID does not exist" containerID="d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463063 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f"} err="failed to get container status \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": rpc error: code = NotFound desc = could not find container \"d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f\": container with ID starting with d0690a3657734a174c05f70abd8410234e39046337f2a376521ff4cba58c609f not found: ID does not exist" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463095 4829 scope.go:117] "RemoveContainer" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: E0217 16:29:00.463457 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": container with ID starting with f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c not found: ID does not exist" containerID="f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c" Feb 17 16:29:00 crc kubenswrapper[4829]: I0217 16:29:00.463512 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c"} err="failed to get container status \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": rpc error: code = NotFound desc = could not find container \"f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c\": container with ID starting with f3592a02ca2f2bd3a8e7260254500ad8906e0c92c2e7bb59432914986d892a3c not found: ID does not exist" Feb 17 16:29:02 crc kubenswrapper[4829]: I0217 16:29:02.300979 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a49506-a612-4019-b32c-9e14503fda42" path="/var/lib/kubelet/pods/62a49506-a612-4019-b32c-9e14503fda42/volumes" Feb 17 16:29:07 crc kubenswrapper[4829]: E0217 16:29:07.282525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:09 crc kubenswrapper[4829]: E0217 16:29:09.281285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:17 crc kubenswrapper[4829]: I0217 16:29:17.310965 4829 scope.go:117] "RemoveContainer" containerID="b093852d9a8ecee7168718bdf187b05c01b5cd20bbf9cd75f443d7a248f6fcbc" Feb 17 16:29:17 crc kubenswrapper[4829]: I0217 16:29:17.349233 4829 scope.go:117] "RemoveContainer" containerID="e3214a1c9770cfbd196a4b73cb48788f0c3797eb0a755f5a161531de4c9a93e6" Feb 17 16:29:19 crc kubenswrapper[4829]: I0217 16:29:19.055700 4829 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:29:19 crc kubenswrapper[4829]: I0217 16:29:19.071782 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-535d-account-create-update-fmkp6"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.039637 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.052365 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.064952 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rzxtw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.077937 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.090007 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.099300 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cglz5"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.127807 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3357-account-create-update-rg852"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.147081 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.160326 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6c18-account-create-update-wl9ps"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.171441 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cnfbw"] Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 
16:29:20.291541 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250927ce-8b7a-4c30-a13d-fd1cd34ee7cd" path="/var/lib/kubelet/pods/250927ce-8b7a-4c30-a13d-fd1cd34ee7cd/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.292168 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef7195e-f16e-4c5e-a84c-69c571ec7bb5" path="/var/lib/kubelet/pods/4ef7195e-f16e-4c5e-a84c-69c571ec7bb5/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.292737 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544f59e2-daea-45db-99b4-d9714f620a74" path="/var/lib/kubelet/pods/544f59e2-daea-45db-99b4-d9714f620a74/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.293283 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a9c261-a9c4-49c8-bec3-891a68d897b6" path="/var/lib/kubelet/pods/c8a9c261-a9c4-49c8-bec3-891a68d897b6/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.294480 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c909da16-2d5d-4706-adb8-f8402ed9f01e" path="/var/lib/kubelet/pods/c909da16-2d5d-4706-adb8-f8402ed9f01e/volumes" Feb 17 16:29:20 crc kubenswrapper[4829]: I0217 16:29:20.295163 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdf2448-5ccb-4351-b022-de49263fd521" path="/var/lib/kubelet/pods/dcdf2448-5ccb-4351-b022-de49263fd521/volumes" Feb 17 16:29:21 crc kubenswrapper[4829]: E0217 16:29:21.283333 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:21 crc kubenswrapper[4829]: E0217 16:29:21.283348 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:32 crc kubenswrapper[4829]: E0217 16:29:32.283840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:33 crc kubenswrapper[4829]: E0217 16:29:33.281474 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:44 crc kubenswrapper[4829]: E0217 16:29:44.283009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:29:46 crc kubenswrapper[4829]: E0217 16:29:46.282389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.528785 4829 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530323 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-utilities" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530349 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-utilities" Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530459 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530475 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: E0217 16:29:56.530527 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-content" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.530540 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="extract-content" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.531040 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a49506-a612-4019-b32c-9e14503fda42" containerName="registry-server" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.534059 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.550995 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.637120 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.638015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.638535 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741439 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.741656 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.742773 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.742834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.774653 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"community-operators-wqzdk\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:56 crc kubenswrapper[4829]: I0217 16:29:56.859563 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.069186 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.092342 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f9vr7"] Feb 17 16:29:57 crc kubenswrapper[4829]: I0217 16:29:57.491451 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101644 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" exitCode=0 Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101742 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166"} Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.101880 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"883efdb41339a304017f80a94e30713ad2829f6a86d10e2c04b2e00ce0d33fd2"} Feb 17 16:29:58 crc kubenswrapper[4829]: I0217 16:29:58.294465 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d00488-ed97-4f10-bf11-7c57e5a4d631" path="/var/lib/kubelet/pods/70d00488-ed97-4f10-bf11-7c57e5a4d631/volumes" Feb 17 16:29:58 crc kubenswrapper[4829]: E0217 16:29:58.295385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:29:59 crc kubenswrapper[4829]: E0217 16:29:59.284452 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.128325 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.185023 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.187094 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.190231 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.190710 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.209082 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.333383 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.334230 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.334729 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.437952 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.438151 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.439764 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.439812 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.452779 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.456492 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"collect-profiles-29522430-gmcbj\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:00 crc kubenswrapper[4829]: I0217 16:30:00.516463 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:01 crc kubenswrapper[4829]: I0217 16:30:01.027530 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 16:30:01 crc kubenswrapper[4829]: W0217 16:30:01.031705 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3000c07b_e126_4f72_9667_251ca9a53989.slice/crio-9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f WatchSource:0}: Error finding container 9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f: Status 404 returned error can't find the container with id 9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f Feb 17 16:30:01 crc kubenswrapper[4829]: I0217 16:30:01.147722 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerStarted","Data":"9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f"} Feb 17 16:30:02 crc 
kubenswrapper[4829]: I0217 16:30:02.159887 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.159992 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.164984 4829 generic.go:334] "Generic (PLEG): container finished" podID="3000c07b-e126-4f72-9667-251ca9a53989" containerID="95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4829]: I0217 16:30:02.165062 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerDied","Data":"95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f"} Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.192228 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerStarted","Data":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.274179 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wqzdk" podStartSLOduration=2.569755908 podStartE2EDuration="7.274163375s" podCreationTimestamp="2026-02-17 16:29:56 +0000 UTC" firstStartedPulling="2026-02-17 16:29:58.105087169 +0000 UTC m=+2110.522105157" lastFinishedPulling="2026-02-17 16:30:02.809494646 +0000 UTC m=+2115.226512624" observedRunningTime="2026-02-17 16:30:03.236861025 
+0000 UTC m=+2115.653879023" watchObservedRunningTime="2026-02-17 16:30:03.274163375 +0000 UTC m=+2115.691181353" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.720247 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865128 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865235 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.865405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") pod \"3000c07b-e126-4f72-9667-251ca9a53989\" (UID: \"3000c07b-e126-4f72-9667-251ca9a53989\") " Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.866554 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume" (OuterVolumeSpecName: "config-volume") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.871983 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv" (OuterVolumeSpecName: "kube-api-access-q7vlv") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "kube-api-access-q7vlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.873374 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3000c07b-e126-4f72-9667-251ca9a53989" (UID: "3000c07b-e126-4f72-9667-251ca9a53989"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968420 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3000c07b-e126-4f72-9667-251ca9a53989-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968752 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vlv\" (UniqueName: \"kubernetes.io/projected/3000c07b-e126-4f72-9667-251ca9a53989-kube-api-access-q7vlv\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:03 crc kubenswrapper[4829]: I0217 16:30:03.968961 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3000c07b-e126-4f72-9667-251ca9a53989-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.204961 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" 
event={"ID":"3000c07b-e126-4f72-9667-251ca9a53989","Type":"ContainerDied","Data":"9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f"} Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.205001 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a75618fcdc31d15847a6a94cc06c9b77ae31ef2d2d7eb11843e69ba3a9a852f" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.205015 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.831449 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.842889 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-m5vfb"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.880655 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:04 crc kubenswrapper[4829]: E0217 16:30:04.881281 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.881299 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.881671 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3000c07b-e126-4f72-9667-251ca9a53989" containerName="collect-profiles" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.883797 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.898683 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992468 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992707 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:04 crc kubenswrapper[4829]: I0217 16:30:04.992808 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.094943 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095049 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095164 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095655 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.095743 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.118186 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"redhat-operators-vg97x\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.201153 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:05 crc kubenswrapper[4829]: I0217 16:30:05.735872 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227452 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" exitCode=0 Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227531 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7"} Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.227837 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"820a1f3e598ecbaf9ce9d8dae39e9dfee320e0cb9b10ed62084cb316ab3f70a1"} Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.294765 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f" path="/var/lib/kubelet/pods/0f5812bc-a81d-439d-bcc8-f7c9ceb3ab3f/volumes" Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.860306 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:06 crc kubenswrapper[4829]: I0217 16:30:06.860618 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:07 crc kubenswrapper[4829]: I0217 16:30:07.239446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" 
event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} Feb 17 16:30:07 crc kubenswrapper[4829]: I0217 16:30:07.917776 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" probeResult="failure" output=< Feb 17 16:30:07 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:30:07 crc kubenswrapper[4829]: > Feb 17 16:30:10 crc kubenswrapper[4829]: E0217 16:30:10.285186 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:11 crc kubenswrapper[4829]: E0217 16:30:11.284262 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:15 crc kubenswrapper[4829]: I0217 16:30:15.329858 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" exitCode=0 Feb 17 16:30:15 crc kubenswrapper[4829]: I0217 16:30:15.330014 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 
16:30:17.376177 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerStarted","Data":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.421620 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vg97x" podStartSLOduration=3.394077978 podStartE2EDuration="13.421597s" podCreationTimestamp="2026-02-17 16:30:04 +0000 UTC" firstStartedPulling="2026-02-17 16:30:06.229512914 +0000 UTC m=+2118.646530892" lastFinishedPulling="2026-02-17 16:30:16.257031936 +0000 UTC m=+2128.674049914" observedRunningTime="2026-02-17 16:30:17.398202887 +0000 UTC m=+2129.815220895" watchObservedRunningTime="2026-02-17 16:30:17.421597 +0000 UTC m=+2129.838615028" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.503431 4829 scope.go:117] "RemoveContainer" containerID="19fa382ac3b1e0dcea6e14bae3060b3ca4a7305dd0b13f45e47ac7484bc20b72" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.530205 4829 scope.go:117] "RemoveContainer" containerID="56fde6f5f968f9b21fa818f6dedc25d815abdb89bcc948291a025b6a2be61029" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.604026 4829 scope.go:117] "RemoveContainer" containerID="7356895af139c1fc573f4130992ef04eb6043436a2149c71d1018146e64edc38" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.650383 4829 scope.go:117] "RemoveContainer" containerID="a78a56e406bc916bcbee0b61aee0a17f7c85f30cb263aca766cd95de859cf5df" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.705019 4829 scope.go:117] "RemoveContainer" containerID="eb95c3235b74ba31c9536f8cb2e0b952c10ba58622f5ea207881e8c088f79896" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.762943 4829 scope.go:117] "RemoveContainer" containerID="163b33d479072091becac60ae3ca4b30fcbdb2bc215e7a08f12e2f27e7c28349" Feb 17 16:30:17 crc 
kubenswrapper[4829]: I0217 16:30:17.821624 4829 scope.go:117] "RemoveContainer" containerID="a5a92e580b15008e7371df2210593a390d4fa1829b92198b0d613a7dfb894bd2" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.844839 4829 scope.go:117] "RemoveContainer" containerID="18024f11e62d3137756adc99055ab77a5a3685cd7f06ad50d401a907e401589f" Feb 17 16:30:17 crc kubenswrapper[4829]: I0217 16:30:17.914469 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" probeResult="failure" output=< Feb 17 16:30:17 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:30:17 crc kubenswrapper[4829]: > Feb 17 16:30:21 crc kubenswrapper[4829]: I0217 16:30:21.040217 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:30:21 crc kubenswrapper[4829]: I0217 16:30:21.051360 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l7ns"] Feb 17 16:30:22 crc kubenswrapper[4829]: E0217 16:30:22.282054 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:22 crc kubenswrapper[4829]: I0217 16:30:22.293645 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef56b6a-4a1c-4305-a88d-3654df130c52" path="/var/lib/kubelet/pods/bef56b6a-4a1c-4305-a88d-3654df130c52/volumes" Feb 17 16:30:23 crc kubenswrapper[4829]: E0217 16:30:23.280745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.031947 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.068076 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.077374 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-cbfe-account-create-update-bfbsk"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.086961 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-zxj99"] Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.202117 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.202177 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.254081 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.501378 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:25 crc kubenswrapper[4829]: I0217 16:30:25.564660 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.041866 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.056403 4829 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbhtp"] Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.296735 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17cc49ce-4e47-470a-ad6b-a4127308a7e4" path="/var/lib/kubelet/pods/17cc49ce-4e47-470a-ad6b-a4127308a7e4/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.298535 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264a77a9-afad-42ac-ac8f-7d705e242db5" path="/var/lib/kubelet/pods/264a77a9-afad-42ac-ac8f-7d705e242db5/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.300212 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fcc02f-9122-4ea6-bb0e-ef135805c127" path="/var/lib/kubelet/pods/38fcc02f-9122-4ea6-bb0e-ef135805c127/volumes" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.924622 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:26 crc kubenswrapper[4829]: I0217 16:30:26.988797 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:27 crc kubenswrapper[4829]: I0217 16:30:27.473463 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vg97x" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" containerID="cri-o://5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" gracePeriod=2 Feb 17 16:30:27 crc kubenswrapper[4829]: I0217 16:30:27.889799 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.312659 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.363851 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.364172 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.364420 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") pod \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\" (UID: \"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f\") " Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.365498 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities" (OuterVolumeSpecName: "utilities") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.365775 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.383771 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x" (OuterVolumeSpecName: "kube-api-access-lnj7x") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "kube-api-access-lnj7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.469312 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnj7x\" (UniqueName: \"kubernetes.io/projected/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-kube-api-access-lnj7x\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.502535 4829 generic.go:334] "Generic (PLEG): container finished" podID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" exitCode=0 Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.502975 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wqzdk" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" containerID="cri-o://30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" gracePeriod=2 Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503853 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg97x" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503875 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503924 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg97x" event={"ID":"7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f","Type":"ContainerDied","Data":"820a1f3e598ecbaf9ce9d8dae39e9dfee320e0cb9b10ed62084cb316ab3f70a1"} Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.503952 4829 scope.go:117] "RemoveContainer" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.552564 4829 scope.go:117] "RemoveContainer" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.556882 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" (UID: "7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.573121 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.629050 4829 scope.go:117] "RemoveContainer" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.752473 4829 scope.go:117] "RemoveContainer" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.752976 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": container with ID starting with 5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f not found: ID does not exist" containerID="5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753018 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f"} err="failed to get container status \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": rpc error: code = NotFound desc = could not find container \"5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f\": container with ID starting with 5a8296cfb2cea2d71ae2ebc85dcf363c87f4ada01860bcfeeb96d6501766493f not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753090 4829 scope.go:117] "RemoveContainer" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.753379 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": container with ID starting with 515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883 not found: ID does not exist" containerID="515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753410 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883"} err="failed to get container status \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": rpc error: code = NotFound desc = could not find container \"515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883\": container with ID starting with 515dada20c1739eaf103384f8335ed00786fbcd207b71e4743a45670bcc5f883 not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753428 4829 scope.go:117] "RemoveContainer" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: E0217 16:30:28.753678 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": container with ID starting with 3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7 not found: ID does not exist" containerID="3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.753710 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7"} err="failed to get container status \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": rpc error: code = NotFound desc = could 
not find container \"3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7\": container with ID starting with 3f31b026935a8f57393c6dd0a4e7404062a25843fbd8ef4caf22464c8d6e91d7 not found: ID does not exist" Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.853395 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:28 crc kubenswrapper[4829]: I0217 16:30:28.864091 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vg97x"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.057367 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094518 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094662 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.094774 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") pod \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\" (UID: \"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a\") " Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.095401 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities" (OuterVolumeSpecName: "utilities") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.095554 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.101684 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r" (OuterVolumeSpecName: "kube-api-access-jwc8r") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "kube-api-access-jwc8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.168638 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" (UID: "ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.199505 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwc8r\" (UniqueName: \"kubernetes.io/projected/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-kube-api-access-jwc8r\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.199542 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521217 4829 generic.go:334] "Generic (PLEG): container finished" podID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" exitCode=0 Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521270 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521301 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wqzdk" event={"ID":"ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a","Type":"ContainerDied","Data":"883efdb41339a304017f80a94e30713ad2829f6a86d10e2c04b2e00ce0d33fd2"} Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521323 4829 scope.go:117] "RemoveContainer" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.521479 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wqzdk" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.567782 4829 scope.go:117] "RemoveContainer" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.587000 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.599113 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wqzdk"] Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.599322 4829 scope.go:117] "RemoveContainer" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.632795 4829 scope.go:117] "RemoveContainer" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.633304 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": container with ID starting with 30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33 not found: ID does not exist" containerID="30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633359 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33"} err="failed to get container status \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": rpc error: code = NotFound desc = could not find container \"30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33\": container with ID starting with 30cb1b4fcee6c04373aaa49f8b4a24c196882afab69b384e6573c2ab30edae33 not 
found: ID does not exist" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633394 4829 scope.go:117] "RemoveContainer" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.633870 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": container with ID starting with 5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5 not found: ID does not exist" containerID="5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633899 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5"} err="failed to get container status \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": rpc error: code = NotFound desc = could not find container \"5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5\": container with ID starting with 5c7b60b2e990f33f10601e8f0852f8797293f2c9029b37dafb6e25a2093d59b5 not found: ID does not exist" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.633919 4829 scope.go:117] "RemoveContainer" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: E0217 16:30:29.634198 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": container with ID starting with 96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166 not found: ID does not exist" containerID="96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166" Feb 17 16:30:29 crc kubenswrapper[4829]: I0217 16:30:29.634224 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166"} err="failed to get container status \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": rpc error: code = NotFound desc = could not find container \"96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166\": container with ID starting with 96d6ccc606d0614328422ca018ede8a3a8a1e7bad309e33fec9a349f81bea166 not found: ID does not exist" Feb 17 16:30:30 crc kubenswrapper[4829]: I0217 16:30:30.297785 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" path="/var/lib/kubelet/pods/7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f/volumes" Feb 17 16:30:30 crc kubenswrapper[4829]: I0217 16:30:30.298853 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" path="/var/lib/kubelet/pods/ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a/volumes" Feb 17 16:30:35 crc kubenswrapper[4829]: E0217 16:30:35.283170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:36 crc kubenswrapper[4829]: E0217 16:30:36.282154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:49 crc kubenswrapper[4829]: E0217 16:30:49.285416 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:30:50 crc kubenswrapper[4829]: E0217 16:30:50.282017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:30:52 crc kubenswrapper[4829]: I0217 16:30:52.424458 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:52 crc kubenswrapper[4829]: I0217 16:30:52.424543 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:03 crc kubenswrapper[4829]: E0217 16:31:03.281834 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:03 crc kubenswrapper[4829]: E0217 16:31:03.281858 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.066325 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.082958 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8dvtl"] Feb 17 16:31:06 crc kubenswrapper[4829]: I0217 16:31:06.302975 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85602fcf-2cee-4c92-8270-623eb79c4baa" path="/var/lib/kubelet/pods/85602fcf-2cee-4c92-8270-623eb79c4baa/volumes" Feb 17 16:31:15 crc kubenswrapper[4829]: E0217 16:31:15.282299 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.021177 4829 scope.go:117] "RemoveContainer" containerID="ba9e6984f6e1375c614ba050673fa1c59a99225935f95385a58551377a0b527d" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.047844 4829 scope.go:117] "RemoveContainer" containerID="035b701778f945716aea71c2327b0e25ac26fff01d700f58e0f7b88f78589b83" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.117561 4829 scope.go:117] "RemoveContainer" containerID="1f98050660b9d45e573f04e86af725a0d2cd93ef0bfb1c053d9999f606e6cb5e" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.189853 4829 scope.go:117] "RemoveContainer" containerID="162abbe87e18a223ced95f748a19c935456faeb9630e09ad92b99fa391ba7ef4" Feb 17 16:31:18 crc kubenswrapper[4829]: I0217 16:31:18.256845 4829 scope.go:117] "RemoveContainer" 
containerID="c01d42cd58dd29c26f6c33d07e27c4650b58bbd03ecd8f0edcae652a5edac447" Feb 17 16:31:18 crc kubenswrapper[4829]: E0217 16:31:18.299870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:22 crc kubenswrapper[4829]: I0217 16:31:22.424737 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:22 crc kubenswrapper[4829]: I0217 16:31:22.425389 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:27 crc kubenswrapper[4829]: E0217 16:31:27.281233 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:33 crc kubenswrapper[4829]: E0217 16:31:33.282061 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:41 crc kubenswrapper[4829]: E0217 16:31:41.283293 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:44 crc kubenswrapper[4829]: E0217 16:31:44.281096 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424136 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424598 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.424643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.425459 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:31:52 crc kubenswrapper[4829]: I0217 16:31:52.425501 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" gracePeriod=600 Feb 17 16:31:52 crc kubenswrapper[4829]: E0217 16:31:52.563123 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.514055 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" exitCode=0 Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.514120 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"} Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.515244 4829 scope.go:117] "RemoveContainer" containerID="c88219688c0e40e9f9dda08fe38e3aeb3786fdf3a1c910e981d872f2aca60a0c" Feb 17 16:31:53 crc kubenswrapper[4829]: I0217 16:31:53.516042 4829 
scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:31:53 crc kubenswrapper[4829]: E0217 16:31:53.516527 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:31:54 crc kubenswrapper[4829]: E0217 16:31:54.281378 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:31:56 crc kubenswrapper[4829]: E0217 16:31:56.280723 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:06 crc kubenswrapper[4829]: I0217 16:32:06.280601 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:06 crc kubenswrapper[4829]: E0217 16:32:06.281423 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:08 crc kubenswrapper[4829]: E0217 16:32:08.295814 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:09 crc kubenswrapper[4829]: E0217 16:32:09.281473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:20 crc kubenswrapper[4829]: E0217 16:32:20.283211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:21 crc kubenswrapper[4829]: I0217 16:32:21.279687 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:21 crc kubenswrapper[4829]: E0217 16:32:21.280392 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 
16:32:21 crc kubenswrapper[4829]: E0217 16:32:21.281542 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:32 crc kubenswrapper[4829]: I0217 16:32:32.281186 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:32 crc kubenswrapper[4829]: E0217 16:32:32.282291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:34 crc kubenswrapper[4829]: E0217 16:32:34.284613 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:35 crc kubenswrapper[4829]: E0217 16:32:35.281078 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:44 crc kubenswrapper[4829]: I0217 16:32:44.279700 4829 scope.go:117] "RemoveContainer" 
containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:44 crc kubenswrapper[4829]: E0217 16:32:44.280424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:32:46 crc kubenswrapper[4829]: E0217 16:32:46.281942 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:49 crc kubenswrapper[4829]: E0217 16:32:49.281908 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:32:57 crc kubenswrapper[4829]: E0217 16:32:57.281708 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:32:58 crc kubenswrapper[4829]: I0217 16:32:58.286935 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:32:58 crc kubenswrapper[4829]: E0217 
16:32:58.287543 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:01 crc kubenswrapper[4829]: E0217 16:33:01.280863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.416261 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.416905 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.417073 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:33:08 crc kubenswrapper[4829]: E0217 16:33:08.418233 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:11 crc kubenswrapper[4829]: I0217 16:33:11.280430 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:11 crc kubenswrapper[4829]: E0217 16:33:11.281271 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:13 crc kubenswrapper[4829]: E0217 16:33:13.281619 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:23 crc kubenswrapper[4829]: I0217 16:33:23.280017 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:23 crc kubenswrapper[4829]: E0217 16:33:23.280638 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:23 crc kubenswrapper[4829]: E0217 16:33:23.282525 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.408969 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.409981 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.410210 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:33:27 crc kubenswrapper[4829]: E0217 16:33:27.411442 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:35 crc kubenswrapper[4829]: I0217 16:33:35.280414 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:33:35 crc kubenswrapper[4829]: E0217 16:33:35.281309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:33:36 crc kubenswrapper[4829]: I0217 16:33:36.607691 4829 generic.go:334] "Generic (PLEG): container finished" podID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerID="e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf" exitCode=2 Feb 17 16:33:36 crc kubenswrapper[4829]: I0217 16:33:36.610654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerDied","Data":"e9cce6c88e1946da2f3186ce5d703a9c8fb3764ba59607c3d4380a8117eaddcf"} Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.214846 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:33:38 crc kubenswrapper[4829]: E0217 16:33:38.308255 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.328555 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.329463 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.329612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") pod \"60a577ad-f610-459b-9f2d-19c6bc6f356a\" (UID: \"60a577ad-f610-459b-9f2d-19c6bc6f356a\") " Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.347494 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt" (OuterVolumeSpecName: "kube-api-access-gwzvt") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "kube-api-access-gwzvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.365703 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.376741 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory" (OuterVolumeSpecName: "inventory") pod "60a577ad-f610-459b-9f2d-19c6bc6f356a" (UID: "60a577ad-f610-459b-9f2d-19c6bc6f356a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438726 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438768 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwzvt\" (UniqueName: \"kubernetes.io/projected/60a577ad-f610-459b-9f2d-19c6bc6f356a-kube-api-access-gwzvt\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.438784 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60a577ad-f610-459b-9f2d-19c6bc6f356a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634296 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" 
event={"ID":"60a577ad-f610-459b-9f2d-19c6bc6f356a","Type":"ContainerDied","Data":"4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313"} Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634354 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccd8d3f03a2911239e775b57bc0852e556ee989179f4f1c8ee8402e41cf4313" Feb 17 16:33:38 crc kubenswrapper[4829]: I0217 16:33:38.634358 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q" Feb 17 16:33:39 crc kubenswrapper[4829]: E0217 16:33:39.284187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.035781 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"] Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036887 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036904 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036923 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036931 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 
crc kubenswrapper[4829]: E0217 16:33:46.036948 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036956 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.036966 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.036974 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037004 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037012 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037046 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037053 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-content" Feb 17 16:33:46 crc kubenswrapper[4829]: E0217 16:33:46.037076 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037084 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="extract-utilities" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037369 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5b5aa1-d2c0-4ec3-8bf1-53ef9fa1bf9f" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037389 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a577ad-f610-459b-9f2d-19c6bc6f356a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.037404 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7494a0-5e6c-4a5d-b060-0e2eb1bb386a" containerName="registry-server" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.038494 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.044271 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.044473 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.045475 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.045672 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.065369 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"] Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.165936 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.166604 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.166659 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268803 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.268896 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.276013 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.276163 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.296762 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp7df\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"
Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.404805 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"
Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.965107 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df"]
Feb 17 16:33:46 crc kubenswrapper[4829]: I0217 16:33:46.971984 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:33:47 crc kubenswrapper[4829]: I0217 16:33:47.720551 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerStarted","Data":"5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099"}
Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.299637 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:33:48 crc kubenswrapper[4829]: E0217 16:33:48.300194 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.731950 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerStarted","Data":"17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d"}
Feb 17 16:33:48 crc kubenswrapper[4829]: I0217 16:33:48.752768 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" podStartSLOduration=2.200639235 podStartE2EDuration="2.752749053s" podCreationTimestamp="2026-02-17 16:33:46 +0000 UTC" firstStartedPulling="2026-02-17 16:33:46.971808652 +0000 UTC m=+2339.388826630" lastFinishedPulling="2026-02-17 16:33:47.52391847 +0000 UTC m=+2339.940936448" observedRunningTime="2026-02-17 16:33:48.747293696 +0000 UTC m=+2341.164311674" watchObservedRunningTime="2026-02-17 16:33:48.752749053 +0000 UTC m=+2341.169767031"
Feb 17 16:33:49 crc kubenswrapper[4829]: E0217 16:33:49.282438 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:33:53 crc kubenswrapper[4829]: E0217 16:33:53.281401 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:34:01 crc kubenswrapper[4829]: I0217 16:34:01.280796 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:34:01 crc kubenswrapper[4829]: E0217 16:34:01.282016 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:34:03 crc kubenswrapper[4829]: E0217 16:34:03.281757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:34:06 crc kubenswrapper[4829]: E0217 16:34:06.282108 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:34:15 crc kubenswrapper[4829]: I0217 16:34:15.279981 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:34:15 crc kubenswrapper[4829]: E0217 16:34:15.280982 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:34:15 crc kubenswrapper[4829]: E0217 16:34:15.283528 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:34:19 crc kubenswrapper[4829]: E0217 16:34:19.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:34:28 crc kubenswrapper[4829]: E0217 16:34:28.297868 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:34:29 crc kubenswrapper[4829]: I0217 16:34:29.279867 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:34:29 crc kubenswrapper[4829]: E0217 16:34:29.280383 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:34:32 crc kubenswrapper[4829]: E0217 16:34:32.284993 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:34:41 crc kubenswrapper[4829]: E0217 16:34:41.281379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:34:43 crc kubenswrapper[4829]: I0217 16:34:43.279526 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:34:43 crc kubenswrapper[4829]: E0217 16:34:43.280111 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:34:44 crc kubenswrapper[4829]: E0217 16:34:44.282017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:34:53 crc kubenswrapper[4829]: E0217 16:34:53.282375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:34:56 crc kubenswrapper[4829]: I0217 16:34:56.282110 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:34:56 crc kubenswrapper[4829]: E0217 16:34:56.283808 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:34:56 crc kubenswrapper[4829]: E0217 16:34:56.287143 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:07 crc kubenswrapper[4829]: E0217 16:35:07.282495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:35:08 crc kubenswrapper[4829]: I0217 16:35:08.293980 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:35:08 crc kubenswrapper[4829]: E0217 16:35:08.295017 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:35:08 crc kubenswrapper[4829]: E0217 16:35:08.298683 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:19 crc kubenswrapper[4829]: E0217 16:35:19.283727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:35:19 crc kubenswrapper[4829]: E0217 16:35:19.284377 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:22 crc kubenswrapper[4829]: I0217 16:35:22.279774 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:35:22 crc kubenswrapper[4829]: E0217 16:35:22.280629 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:35:30 crc kubenswrapper[4829]: E0217 16:35:30.282756 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:35:32 crc kubenswrapper[4829]: E0217 16:35:32.283950 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:33 crc kubenswrapper[4829]: I0217 16:35:33.279345 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:35:33 crc kubenswrapper[4829]: E0217 16:35:33.279711 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:35:44 crc kubenswrapper[4829]: I0217 16:35:44.279817 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:35:44 crc kubenswrapper[4829]: E0217 16:35:44.280802 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:35:44 crc kubenswrapper[4829]: E0217 16:35:44.282193 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:35:46 crc kubenswrapper[4829]: E0217 16:35:46.284871 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:57 crc kubenswrapper[4829]: I0217 16:35:57.281286 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:35:57 crc kubenswrapper[4829]: E0217 16:35:57.282636 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:35:57 crc kubenswrapper[4829]: E0217 16:35:57.284262 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:35:59 crc kubenswrapper[4829]: E0217 16:35:59.284488 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:36:09 crc kubenswrapper[4829]: I0217 16:36:09.279750 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:36:09 crc kubenswrapper[4829]: E0217 16:36:09.280440 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:36:11 crc kubenswrapper[4829]: E0217 16:36:11.286030 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:36:12 crc kubenswrapper[4829]: E0217 16:36:12.288480 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.600880 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.603775 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.634650 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.751998 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.752062 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.752086 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854525 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854621 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854650 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.854964 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.855017 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.875346 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"certified-operators-hgcfb\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") " pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:17 crc kubenswrapper[4829]: I0217 16:36:17.924618 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:18 crc kubenswrapper[4829]: I0217 16:36:18.482994 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:18 crc kubenswrapper[4829]: I0217 16:36:18.585062 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"7a1f6e48924ecff9268477f3c718e0a5dbc385e04f2e313cd9042e7148b74cc2"}
Feb 17 16:36:19 crc kubenswrapper[4829]: I0217 16:36:19.598814 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" exitCode=0
Feb 17 16:36:19 crc kubenswrapper[4829]: I0217 16:36:19.599042 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689"}
Feb 17 16:36:21 crc kubenswrapper[4829]: I0217 16:36:21.634737 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"}
Feb 17 16:36:22 crc kubenswrapper[4829]: I0217 16:36:22.280844 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:36:22 crc kubenswrapper[4829]: E0217 16:36:22.281554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:36:22 crc kubenswrapper[4829]: E0217 16:36:22.284068 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:36:23 crc kubenswrapper[4829]: E0217 16:36:23.538232 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ab4c2e_a614_46f8_b7fc_259bacfeb8b4.slice/crio-dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:36:23 crc kubenswrapper[4829]: I0217 16:36:23.658397 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" exitCode=0
Feb 17 16:36:23 crc kubenswrapper[4829]: I0217 16:36:23.658442 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"}
Feb 17 16:36:24 crc kubenswrapper[4829]: I0217 16:36:24.671821 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerStarted","Data":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"}
Feb 17 16:36:24 crc kubenswrapper[4829]: I0217 16:36:24.705022 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgcfb" podStartSLOduration=3.145316321 podStartE2EDuration="7.704986867s" podCreationTimestamp="2026-02-17 16:36:17 +0000 UTC" firstStartedPulling="2026-02-17 16:36:19.601672734 +0000 UTC m=+2492.018690702" lastFinishedPulling="2026-02-17 16:36:24.16134327 +0000 UTC m=+2496.578361248" observedRunningTime="2026-02-17 16:36:24.692454298 +0000 UTC m=+2497.109472276" watchObservedRunningTime="2026-02-17 16:36:24.704986867 +0000 UTC m=+2497.122004855"
Feb 17 16:36:26 crc kubenswrapper[4829]: E0217 16:36:26.282135 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.925339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.925823 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:27 crc kubenswrapper[4829]: I0217 16:36:27.988343 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:33 crc kubenswrapper[4829]: E0217 16:36:33.282930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:36:34 crc kubenswrapper[4829]: I0217 16:36:34.279630 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d"
Feb 17 16:36:34 crc kubenswrapper[4829]: E0217 16:36:34.279870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:36:37 crc kubenswrapper[4829]: I0217 16:36:37.997384 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:38 crc kubenswrapper[4829]: I0217 16:36:38.057370 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:38 crc kubenswrapper[4829]: I0217 16:36:38.822235 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgcfb" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" containerID="cri-o://c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" gracePeriod=2
Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.281746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.479065 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561078 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") "
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561199 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") "
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.561546 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") pod \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\" (UID: \"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4\") "
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.564184 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities" (OuterVolumeSpecName: "utilities") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.576326 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs" (OuterVolumeSpecName: "kube-api-access-9nlqs") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "kube-api-access-9nlqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.611792 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" (UID: "21ab4c2e-a614-46f8-b7fc-259bacfeb8b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664401 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664436 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlqs\" (UniqueName: \"kubernetes.io/projected/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-kube-api-access-9nlqs\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.664447 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836221 4829 generic.go:334] "Generic (PLEG): container finished" podID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" exitCode=0
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836268 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"}
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836286 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgcfb"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836304 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgcfb" event={"ID":"21ab4c2e-a614-46f8-b7fc-259bacfeb8b4","Type":"ContainerDied","Data":"7a1f6e48924ecff9268477f3c718e0a5dbc385e04f2e313cd9042e7148b74cc2"}
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.836337 4829 scope.go:117] "RemoveContainer" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.871632 4829 scope.go:117] "RemoveContainer" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.890837 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.907140 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgcfb"]
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.919931 4829 scope.go:117] "RemoveContainer" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689"
Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.960997 4829 scope.go:117] "RemoveContainer" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"
Feb 17
16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.961355 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": container with ID starting with c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac not found: ID does not exist" containerID="c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961397 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac"} err="failed to get container status \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": rpc error: code = NotFound desc = could not find container \"c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac\": container with ID starting with c894bc6bcb5c12f31a41dc3d4093604e237ded6b3c76c0fa3b9e89cba18046ac not found: ID does not exist" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961421 4829 scope.go:117] "RemoveContainer" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.961853 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": container with ID starting with dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9 not found: ID does not exist" containerID="dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961882 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9"} err="failed to get container status 
\"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": rpc error: code = NotFound desc = could not find container \"dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9\": container with ID starting with dae3465ea6651b1c968e4da2e342f4698fb0795720e6d50d1073abab5d802fc9 not found: ID does not exist" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.961901 4829 scope.go:117] "RemoveContainer" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" Feb 17 16:36:39 crc kubenswrapper[4829]: E0217 16:36:39.962366 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": container with ID starting with 11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689 not found: ID does not exist" containerID="11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689" Feb 17 16:36:39 crc kubenswrapper[4829]: I0217 16:36:39.962395 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689"} err="failed to get container status \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": rpc error: code = NotFound desc = could not find container \"11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689\": container with ID starting with 11f34aaedea256bcac179664885b594b27ad4f73b7891dadb6553d3740671689 not found: ID does not exist" Feb 17 16:36:40 crc kubenswrapper[4829]: I0217 16:36:40.291593 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" path="/var/lib/kubelet/pods/21ab4c2e-a614-46f8-b7fc-259bacfeb8b4/volumes" Feb 17 16:36:47 crc kubenswrapper[4829]: E0217 16:36:47.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:36:49 crc kubenswrapper[4829]: I0217 16:36:49.279727 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:36:49 crc kubenswrapper[4829]: E0217 16:36:49.280913 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:36:52 crc kubenswrapper[4829]: E0217 16:36:52.283273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:36:59 crc kubenswrapper[4829]: E0217 16:36:59.283651 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:04 crc kubenswrapper[4829]: I0217 16:37:04.280554 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:37:04 crc kubenswrapper[4829]: E0217 16:37:04.283320 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:05 crc kubenswrapper[4829]: I0217 16:37:05.133670 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} Feb 17 16:37:13 crc kubenswrapper[4829]: E0217 16:37:13.281724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:17 crc kubenswrapper[4829]: E0217 16:37:17.281524 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:24 crc kubenswrapper[4829]: E0217 16:37:24.283779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:30 crc kubenswrapper[4829]: E0217 16:37:30.281996 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:36 crc kubenswrapper[4829]: E0217 16:37:36.282727 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:45 crc kubenswrapper[4829]: E0217 16:37:45.281951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:48 crc kubenswrapper[4829]: E0217 16:37:48.303891 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:37:56 crc kubenswrapper[4829]: E0217 16:37:56.282197 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:37:59 crc kubenswrapper[4829]: E0217 16:37:59.286306 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:09 crc kubenswrapper[4829]: E0217 16:38:09.283789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130099 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130562 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.130704 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:38:13 crc kubenswrapper[4829]: E0217 16:38:13.132615 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:20 crc kubenswrapper[4829]: E0217 16:38:20.283593 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:24 crc kubenswrapper[4829]: E0217 16:38:24.282700 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.407268 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.408190 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.408491 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:38:32 crc kubenswrapper[4829]: E0217 16:38:32.409807 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:38 crc kubenswrapper[4829]: E0217 16:38:38.306693 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:38:46 crc kubenswrapper[4829]: E0217 16:38:46.283714 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:38:51 crc kubenswrapper[4829]: E0217 16:38:51.283224 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:01 crc kubenswrapper[4829]: E0217 16:39:01.285208 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:06 crc kubenswrapper[4829]: E0217 16:39:06.282436 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.561992 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562760 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-utilities" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562785 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-utilities" Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562825 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-content" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562837 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="extract-content" Feb 17 16:39:07 crc kubenswrapper[4829]: E0217 16:39:07.562883 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.562896 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.563294 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ab4c2e-a614-46f8-b7fc-259bacfeb8b4" containerName="registry-server" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.570754 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.613061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628015 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628135 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.628231 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730374 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730427 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730852 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.730976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.755386 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"redhat-marketplace-7c56n\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:07 crc kubenswrapper[4829]: I0217 16:39:07.893016 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.439319 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763450 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" exitCode=0 Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763512 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482"} Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.763760 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"2f5f9ac884b93c77a1abad82cb7157f8f7dddf20536b72ef99bb6974aee0fb66"} Feb 17 16:39:08 crc kubenswrapper[4829]: I0217 16:39:08.765993 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:39:09 crc kubenswrapper[4829]: I0217 16:39:09.777000 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} Feb 17 16:39:10 crc kubenswrapper[4829]: I0217 16:39:10.799792 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" exitCode=0 Feb 17 16:39:10 crc kubenswrapper[4829]: I0217 16:39:10.800129 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} Feb 17 16:39:12 crc kubenswrapper[4829]: I0217 16:39:12.829007 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerStarted","Data":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} Feb 17 16:39:12 crc kubenswrapper[4829]: I0217 16:39:12.854664 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7c56n" podStartSLOduration=2.303761266 podStartE2EDuration="5.85464276s" podCreationTimestamp="2026-02-17 16:39:07 +0000 UTC" firstStartedPulling="2026-02-17 16:39:08.765447122 +0000 UTC m=+2661.182465100" lastFinishedPulling="2026-02-17 16:39:12.316328606 +0000 UTC m=+2664.733346594" observedRunningTime="2026-02-17 16:39:12.849364258 +0000 UTC m=+2665.266382246" watchObservedRunningTime="2026-02-17 16:39:12.85464276 +0000 UTC m=+2665.271660738" Feb 17 16:39:14 crc kubenswrapper[4829]: E0217 16:39:14.281327 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:17 crc kubenswrapper[4829]: I0217 16:39:17.893952 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:17 crc kubenswrapper[4829]: I0217 16:39:17.894684 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:17 crc kubenswrapper[4829]: 
I0217 16:39:17.957353 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:18 crc kubenswrapper[4829]: I0217 16:39:18.960981 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:19 crc kubenswrapper[4829]: I0217 16:39:19.025589 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:19 crc kubenswrapper[4829]: E0217 16:39:19.283007 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:20 crc kubenswrapper[4829]: I0217 16:39:20.910950 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7c56n" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" containerID="cri-o://085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" gracePeriod=2 Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.479767 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601666 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601737 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.601931 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") pod \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\" (UID: \"d2f1183e-fedb-40ba-83b4-9ae43daefc72\") " Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.603401 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities" (OuterVolumeSpecName: "utilities") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.608860 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz" (OuterVolumeSpecName: "kube-api-access-dqnxz") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "kube-api-access-dqnxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.644325 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2f1183e-fedb-40ba-83b4-9ae43daefc72" (UID: "d2f1183e-fedb-40ba-83b4-9ae43daefc72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705636 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705684 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnxz\" (UniqueName: \"kubernetes.io/projected/d2f1183e-fedb-40ba-83b4-9ae43daefc72-kube-api-access-dqnxz\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.705699 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f1183e-fedb-40ba-83b4-9ae43daefc72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926216 4829 generic.go:334] "Generic (PLEG): container finished" podID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" exitCode=0 Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926303 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7c56n" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926324 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926382 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7c56n" event={"ID":"d2f1183e-fedb-40ba-83b4-9ae43daefc72","Type":"ContainerDied","Data":"2f5f9ac884b93c77a1abad82cb7157f8f7dddf20536b72ef99bb6974aee0fb66"} Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.926401 4829 scope.go:117] "RemoveContainer" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.967653 4829 scope.go:117] "RemoveContainer" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:21 crc kubenswrapper[4829]: I0217 16:39:21.996829 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.004841 4829 scope.go:117] "RemoveContainer" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.017406 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7c56n"] Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.071921 4829 scope.go:117] "RemoveContainer" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.072542 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": container with ID starting with 085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557 not found: ID does not exist" containerID="085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.072642 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557"} err="failed to get container status \"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": rpc error: code = NotFound desc = could not find container \"085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557\": container with ID starting with 085f2bae7e0aac5f1733d5c942b026da5219342aec3c02f81862aff3b22f3557 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.072691 4829 scope.go:117] "RemoveContainer" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.073228 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": container with ID starting with bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3 not found: ID does not exist" containerID="bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073273 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3"} err="failed to get container status \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": rpc error: code = NotFound desc = could not find container \"bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3\": container with ID 
starting with bb27625cbe4dbb6ffefd054a194acbb9d2479e3692a3779530af81919cea26f3 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073302 4829 scope.go:117] "RemoveContainer" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: E0217 16:39:22.073684 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": container with ID starting with fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482 not found: ID does not exist" containerID="fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.073810 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482"} err="failed to get container status \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": rpc error: code = NotFound desc = could not find container \"fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482\": container with ID starting with fce5091b82017e8ced7a42bcc3d3adbdbad8c55eb93b30ff5bd4beb209494482 not found: ID does not exist" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.309906 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" path="/var/lib/kubelet/pods/d2f1183e-fedb-40ba-83b4-9ae43daefc72/volumes" Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 16:39:22.425329 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:22 crc kubenswrapper[4829]: I0217 
16:39:22.425422 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:25 crc kubenswrapper[4829]: E0217 16:39:25.283265 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:33 crc kubenswrapper[4829]: E0217 16:39:33.282008 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:39 crc kubenswrapper[4829]: E0217 16:39:39.282743 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:39:45 crc kubenswrapper[4829]: E0217 16:39:45.283540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:39:52 crc kubenswrapper[4829]: 
I0217 16:39:52.424266 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:52 crc kubenswrapper[4829]: I0217 16:39:52.424911 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:54 crc kubenswrapper[4829]: E0217 16:39:54.283089 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:00 crc kubenswrapper[4829]: E0217 16:40:00.282649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:03 crc kubenswrapper[4829]: I0217 16:40:03.473974 4829 generic.go:334] "Generic (PLEG): container finished" podID="30690071-6fc2-4647-82c0-6e5234005aec" containerID="17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d" exitCode=2 Feb 17 16:40:03 crc kubenswrapper[4829]: I0217 16:40:03.474371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" 
event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerDied","Data":"17be56dc991459c60c3b714ec5bde42f8f35e9ec67b126c3189fc199ba0c0f0d"} Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.121530 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.170717 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.170809 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.171073 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") pod \"30690071-6fc2-4647-82c0-6e5234005aec\" (UID: \"30690071-6fc2-4647-82c0-6e5234005aec\") " Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.185891 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn" (OuterVolumeSpecName: "kube-api-access-vgbsn") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "kube-api-access-vgbsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.208805 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory" (OuterVolumeSpecName: "inventory") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.233445 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30690071-6fc2-4647-82c0-6e5234005aec" (UID: "30690071-6fc2-4647-82c0-6e5234005aec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274684 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274740 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30690071-6fc2-4647-82c0-6e5234005aec-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.274753 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgbsn\" (UniqueName: \"kubernetes.io/projected/30690071-6fc2-4647-82c0-6e5234005aec-kube-api-access-vgbsn\") on node \"crc\" DevicePath \"\"" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499085 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" 
event={"ID":"30690071-6fc2-4647-82c0-6e5234005aec","Type":"ContainerDied","Data":"5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099"} Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499122 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717ec95b0163a4cb2968a7f5092a77943894dd653eb733bf6bc122420d46099" Feb 17 16:40:05 crc kubenswrapper[4829]: I0217 16:40:05.499157 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp7df" Feb 17 16:40:06 crc kubenswrapper[4829]: E0217 16:40:06.284353 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:11 crc kubenswrapper[4829]: E0217 16:40:11.282866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:20 crc kubenswrapper[4829]: E0217 16:40:20.281555 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.424790 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.425419 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.425478 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.426560 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.426693 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28" gracePeriod=600 Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746428 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28" exitCode=0 Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746510 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28"} Feb 17 16:40:22 crc kubenswrapper[4829]: I0217 16:40:22.746855 4829 scope.go:117] "RemoveContainer" containerID="3ab7b402a56655922b0ce243820c1c94a9074e9faf65d01320c06531744f3a8d" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.033802 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034383 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034409 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034478 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034488 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034498 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-content" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034505 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-content" Feb 17 16:40:23 crc kubenswrapper[4829]: E0217 16:40:23.034525 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-utilities" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034533 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="extract-utilities" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034822 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f1183e-fedb-40ba-83b4-9ae43daefc72" containerName="registry-server" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.034862 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="30690071-6fc2-4647-82c0-6e5234005aec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.035844 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039194 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039552 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.039723 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.040372 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.050195 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170408 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170540 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.170782 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272560 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272660 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.272782 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.279289 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.286121 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.293912 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 
16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.355815 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.763500 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"} Feb 17 16:40:23 crc kubenswrapper[4829]: I0217 16:40:23.911110 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt"] Feb 17 16:40:23 crc kubenswrapper[4829]: W0217 16:40:23.915536 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0fd9f61_596b_4ef3_b6da_6ebe6b04d497.slice/crio-a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3 WatchSource:0}: Error finding container a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3: Status 404 returned error can't find the container with id a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3 Feb 17 16:40:24 crc kubenswrapper[4829]: E0217 16:40:24.282685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.778165 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" 
event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerStarted","Data":"567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71"} Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.779654 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerStarted","Data":"a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3"} Feb 17 16:40:24 crc kubenswrapper[4829]: I0217 16:40:24.804727 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" podStartSLOduration=1.286118851 podStartE2EDuration="1.804707494s" podCreationTimestamp="2026-02-17 16:40:23 +0000 UTC" firstStartedPulling="2026-02-17 16:40:23.918489701 +0000 UTC m=+2736.335507679" lastFinishedPulling="2026-02-17 16:40:24.437078334 +0000 UTC m=+2736.854096322" observedRunningTime="2026-02-17 16:40:24.802295059 +0000 UTC m=+2737.219313037" watchObservedRunningTime="2026-02-17 16:40:24.804707494 +0000 UTC m=+2737.221725472" Feb 17 16:40:33 crc kubenswrapper[4829]: E0217 16:40:33.283787 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:35 crc kubenswrapper[4829]: E0217 16:40:35.297155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:48 crc kubenswrapper[4829]: 
E0217 16:40:48.294765 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:40:48 crc kubenswrapper[4829]: E0217 16:40:48.295518 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:40:59 crc kubenswrapper[4829]: E0217 16:40:59.281704 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:00 crc kubenswrapper[4829]: E0217 16:41:00.283894 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:12 crc kubenswrapper[4829]: E0217 16:41:12.282175 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:15 crc 
kubenswrapper[4829]: E0217 16:41:15.281599 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:24 crc kubenswrapper[4829]: E0217 16:41:24.281618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.564241 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.569106 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.580012 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657270 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657490 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.657809 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760076 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760260 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760381 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760887 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.760956 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.783091 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"community-operators-sdh9b\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:25 crc kubenswrapper[4829]: I0217 16:41:25.895760 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:26 crc kubenswrapper[4829]: I0217 16:41:26.461727 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:26 crc kubenswrapper[4829]: I0217 16:41:26.525809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"531071b097d235504f97e76bdf7dd4e2670ea82dee119089f6be91830c6db602"} Feb 17 16:41:27 crc kubenswrapper[4829]: I0217 16:41:27.535914 4829 generic.go:334] "Generic (PLEG): container finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750" exitCode=0 Feb 17 16:41:27 crc kubenswrapper[4829]: I0217 16:41:27.536112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750"} Feb 17 16:41:28 crc kubenswrapper[4829]: I0217 16:41:28.551513 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d"} Feb 17 16:41:30 crc kubenswrapper[4829]: E0217 16:41:30.286115 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:30 crc kubenswrapper[4829]: I0217 16:41:30.575140 4829 generic.go:334] "Generic (PLEG): container 
finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d" exitCode=0 Feb 17 16:41:30 crc kubenswrapper[4829]: I0217 16:41:30.575175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d"} Feb 17 16:41:31 crc kubenswrapper[4829]: I0217 16:41:31.586120 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerStarted","Data":"a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f"} Feb 17 16:41:31 crc kubenswrapper[4829]: I0217 16:41:31.615228 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sdh9b" podStartSLOduration=3.179505795 podStartE2EDuration="6.615208021s" podCreationTimestamp="2026-02-17 16:41:25 +0000 UTC" firstStartedPulling="2026-02-17 16:41:27.538492569 +0000 UTC m=+2799.955510547" lastFinishedPulling="2026-02-17 16:41:30.974194795 +0000 UTC m=+2803.391212773" observedRunningTime="2026-02-17 16:41:31.605828469 +0000 UTC m=+2804.022846457" watchObservedRunningTime="2026-02-17 16:41:31.615208021 +0000 UTC m=+2804.032225999" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.932623 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.935937 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.953455 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970569 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970824 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:32 crc kubenswrapper[4829]: I0217 16:41:32.970873 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.073899 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074018 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074112 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.074545 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.075018 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.101536 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"redhat-operators-lg2b5\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.270367 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:41:33 crc kubenswrapper[4829]: I0217 16:41:33.869895 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:41:33 crc kubenswrapper[4829]: W0217 16:41:33.871161 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcafaefdf_5318_4146_bf8f_f2e8d5d83ec6.slice/crio-8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9 WatchSource:0}: Error finding container 8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9: Status 404 returned error can't find the container with id 8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9 Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641100 4829 generic.go:334] "Generic (PLEG): container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586" exitCode=0 Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641454 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"} Feb 17 16:41:34 crc kubenswrapper[4829]: I0217 16:41:34.641489 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9"} Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.896490 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.896938 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:35 crc kubenswrapper[4829]: I0217 16:41:35.949606 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:36 crc kubenswrapper[4829]: E0217 16:41:36.283719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:36 crc kubenswrapper[4829]: I0217 16:41:36.729105 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:37 crc kubenswrapper[4829]: I0217 16:41:37.125643 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:38 crc kubenswrapper[4829]: I0217 16:41:38.690567 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"} Feb 17 16:41:38 crc kubenswrapper[4829]: I0217 16:41:38.691455 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdh9b" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" containerID="cri-o://a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f" gracePeriod=2 Feb 17 16:41:39 crc kubenswrapper[4829]: I0217 16:41:39.703898 4829 generic.go:334] "Generic (PLEG): container finished" podID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerID="a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f" exitCode=0 Feb 17 16:41:39 crc 
kubenswrapper[4829]: I0217 16:41:39.703955 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f"} Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.204435 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285646 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285711 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.285897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") pod \"939a62be-82dd-4a76-9dc2-8fbadadc3739\" (UID: \"939a62be-82dd-4a76-9dc2-8fbadadc3739\") " Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.286773 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities" (OuterVolumeSpecName: "utilities") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.304399 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x" (OuterVolumeSpecName: "kube-api-access-4vk9x") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "kube-api-access-4vk9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.342454 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "939a62be-82dd-4a76-9dc2-8fbadadc3739" (UID: "939a62be-82dd-4a76-9dc2-8fbadadc3739"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.388102 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.389074 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/939a62be-82dd-4a76-9dc2-8fbadadc3739-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.389204 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vk9x\" (UniqueName: \"kubernetes.io/projected/939a62be-82dd-4a76-9dc2-8fbadadc3739-kube-api-access-4vk9x\") on node \"crc\" DevicePath \"\"" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdh9b" 
event={"ID":"939a62be-82dd-4a76-9dc2-8fbadadc3739","Type":"ContainerDied","Data":"531071b097d235504f97e76bdf7dd4e2670ea82dee119089f6be91830c6db602"} Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717549 4829 scope.go:117] "RemoveContainer" containerID="a0f9358c42fc2a26c30ce18355c5f1417a967409e9274c0e9ba49db30356367f" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.717604 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdh9b" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.763614 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.777955 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdh9b"] Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.846799 4829 scope.go:117] "RemoveContainer" containerID="1f16cd06e0ecb1fbc7ba351892cd9bd01655a3c61afb55af7668f04ac59d886d" Feb 17 16:41:40 crc kubenswrapper[4829]: I0217 16:41:40.881632 4829 scope.go:117] "RemoveContainer" containerID="ea5e9b46326bb8a1c73022fdd8140fbdac504f4a3d4dc4c3f9535788ec7f1750" Feb 17 16:41:42 crc kubenswrapper[4829]: I0217 16:41:42.293343 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" path="/var/lib/kubelet/pods/939a62be-82dd-4a76-9dc2-8fbadadc3739/volumes" Feb 17 16:41:43 crc kubenswrapper[4829]: E0217 16:41:43.280649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:45 crc kubenswrapper[4829]: I0217 16:41:45.794425 4829 generic.go:334] "Generic (PLEG): 
container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251" exitCode=0 Feb 17 16:41:45 crc kubenswrapper[4829]: I0217 16:41:45.795639 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"} Feb 17 16:41:49 crc kubenswrapper[4829]: E0217 16:41:49.281649 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:41:55 crc kubenswrapper[4829]: E0217 16:41:55.429930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:41:56 crc kubenswrapper[4829]: I0217 16:41:56.962809 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerStarted","Data":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"} Feb 17 16:41:56 crc kubenswrapper[4829]: I0217 16:41:56.990207 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lg2b5" podStartSLOduration=4.204083472 podStartE2EDuration="24.990180678s" podCreationTimestamp="2026-02-17 16:41:32 +0000 UTC" firstStartedPulling="2026-02-17 16:41:34.645243213 +0000 UTC m=+2807.062261191" 
lastFinishedPulling="2026-02-17 16:41:55.431340419 +0000 UTC m=+2827.848358397" observedRunningTime="2026-02-17 16:41:56.979649235 +0000 UTC m=+2829.396667213" watchObservedRunningTime="2026-02-17 16:41:56.990180678 +0000 UTC m=+2829.407198676" Feb 17 16:42:03 crc kubenswrapper[4829]: I0217 16:42:03.271083 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:03 crc kubenswrapper[4829]: I0217 16:42:03.271941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:03 crc kubenswrapper[4829]: E0217 16:42:03.283246 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:42:04 crc kubenswrapper[4829]: I0217 16:42:04.325089 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lg2b5" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" probeResult="failure" output=< Feb 17 16:42:04 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:42:04 crc kubenswrapper[4829]: > Feb 17 16:42:10 crc kubenswrapper[4829]: E0217 16:42:10.282466 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.336017 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.402419 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:13 crc kubenswrapper[4829]: I0217 16:42:13.586851 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:42:15 crc kubenswrapper[4829]: I0217 16:42:15.177303 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lg2b5" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" containerID="cri-o://ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" gracePeriod=2 Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:15.843128 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.023568 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024240 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024305 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") pod \"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\" (UID: 
\"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6\") " Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.024564 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities" (OuterVolumeSpecName: "utilities") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.025056 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.045962 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr" (OuterVolumeSpecName: "kube-api-access-fpqlr") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "kube-api-access-fpqlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.129254 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpqlr\" (UniqueName: \"kubernetes.io/projected/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-kube-api-access-fpqlr\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.166512 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" (UID: "cafaefdf-5318-4146-bf8f-f2e8d5d83ec6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190525 4829 generic.go:334] "Generic (PLEG): container finished" podID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" exitCode=0 Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190566 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"} Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190616 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2b5" event={"ID":"cafaefdf-5318-4146-bf8f-f2e8d5d83ec6","Type":"ContainerDied","Data":"8308f625d9bfa39cd116b7ff15507df451e4cd0f0ed35b7683e9842f868414d9"} Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190637 4829 scope.go:117] "RemoveContainer" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.190785 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2b5" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.220747 4829 scope.go:117] "RemoveContainer" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.237154 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.239061 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.263417 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lg2b5"] Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.267722 4829 scope.go:117] "RemoveContainer" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586" Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.280854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.294969 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" path="/var/lib/kubelet/pods/cafaefdf-5318-4146-bf8f-f2e8d5d83ec6/volumes" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.321794 4829 scope.go:117] "RemoveContainer" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.322311 4829 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": container with ID starting with ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8 not found: ID does not exist" containerID="ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322377 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8"} err="failed to get container status \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": rpc error: code = NotFound desc = could not find container \"ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8\": container with ID starting with ebcce11d3a839abbd691c26f957d0f49c594d9ab209b1f3371b7fa7003567ea8 not found: ID does not exist" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322407 4829 scope.go:117] "RemoveContainer" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251" Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.322931 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": container with ID starting with 1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251 not found: ID does not exist" containerID="1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.322978 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251"} err="failed to get container status \"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": rpc error: code = NotFound desc = could not find container 
\"1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251\": container with ID starting with 1ecd307073db9b68a13e61e35399211e1e88f1d5aa4ffa71a3e1a4b9eafcd251 not found: ID does not exist" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.323006 4829 scope.go:117] "RemoveContainer" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586" Feb 17 16:42:16 crc kubenswrapper[4829]: E0217 16:42:16.323404 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": container with ID starting with 360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586 not found: ID does not exist" containerID="360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586" Feb 17 16:42:16 crc kubenswrapper[4829]: I0217 16:42:16.323440 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586"} err="failed to get container status \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": rpc error: code = NotFound desc = could not find container \"360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586\": container with ID starting with 360ac4e3501520089df703e4acffb3599e1b8f61a61bf6b292a59a2b46767586 not found: ID does not exist" Feb 17 16:42:21 crc kubenswrapper[4829]: E0217 16:42:21.283546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:42:22 crc kubenswrapper[4829]: I0217 16:42:22.424969 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:22 crc kubenswrapper[4829]: I0217 16:42:22.425254 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:27 crc kubenswrapper[4829]: E0217 16:42:27.282945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:42:33 crc kubenswrapper[4829]: E0217 16:42:33.282528 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:42:42 crc kubenswrapper[4829]: E0217 16:42:42.282037 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:42:48 crc kubenswrapper[4829]: E0217 16:42:48.282168 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:42:52 crc kubenswrapper[4829]: I0217 16:42:52.440971 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:42:52 crc kubenswrapper[4829]: I0217 16:42:52.441751 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:42:55 crc kubenswrapper[4829]: E0217 16:42:55.281339 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:42:59 crc kubenswrapper[4829]: E0217 16:42:59.284252 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:43:07 crc kubenswrapper[4829]: E0217 16:43:07.283052 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:43:11 crc kubenswrapper[4829]: E0217 16:43:11.280928 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.406942 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.407399 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.407526 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.409327 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.424798 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.424896 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.425670 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.426556 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.426705 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" gracePeriod=600 Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.556537 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994285 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" exitCode=0 Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994335 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"} Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.994372 4829 scope.go:117] "RemoveContainer" containerID="9c407bf91e4a7bac6b209d48673d2558d9000252c3665c4be3c76afd93057c28" Feb 17 16:43:22 crc kubenswrapper[4829]: I0217 16:43:22.995096 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:43:22 crc kubenswrapper[4829]: E0217 16:43:22.996010 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:43:23 crc kubenswrapper[4829]: E0217 16:43:23.281489 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:43:36 crc kubenswrapper[4829]: I0217 16:43:36.281543 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:43:36 crc kubenswrapper[4829]: E0217 16:43:36.282389 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.283382 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412120 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412190 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.412332 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:43:37 crc kubenswrapper[4829]: E0217 16:43:37.414149 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:43:48 crc kubenswrapper[4829]: E0217 16:43:48.284706 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:43:49 crc kubenswrapper[4829]: I0217 16:43:49.279237 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:43:49 crc kubenswrapper[4829]: E0217 16:43:49.279789 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:43:49 crc kubenswrapper[4829]: E0217 16:43:49.281102 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:00 crc kubenswrapper[4829]: I0217 16:44:00.279723 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:00 crc kubenswrapper[4829]: E0217 16:44:00.280587 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:01 crc kubenswrapper[4829]: E0217 16:44:01.281161 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:03 crc kubenswrapper[4829]: E0217 16:44:03.282967 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:12 crc kubenswrapper[4829]: E0217 16:44:12.283819 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:14 crc kubenswrapper[4829]: I0217 16:44:14.279902 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:14 crc kubenswrapper[4829]: E0217 16:44:14.281152 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:16 crc kubenswrapper[4829]: E0217 16:44:16.282519 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:25 crc kubenswrapper[4829]: E0217 16:44:25.282533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:28 crc kubenswrapper[4829]: I0217 16:44:28.289462 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:28 crc kubenswrapper[4829]: E0217 16:44:28.290351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:31 crc kubenswrapper[4829]: E0217 16:44:31.281280 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:37 crc kubenswrapper[4829]: E0217 16:44:37.283282 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:43 crc kubenswrapper[4829]: I0217 16:44:43.279603 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:43 crc kubenswrapper[4829]: E0217 16:44:43.280387 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:43 crc kubenswrapper[4829]: E0217 16:44:43.282286 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:44:49 crc kubenswrapper[4829]: E0217 16:44:49.282048 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:44:54 crc kubenswrapper[4829]: I0217 16:44:54.279765 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:44:54 crc kubenswrapper[4829]: E0217 16:44:54.280582 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:44:55 crc kubenswrapper[4829]: E0217 16:44:55.282495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.174792 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175884 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175901 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175917 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175924 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175948 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.175977 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.175985 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.176009 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176016 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-utilities" Feb 17 16:45:00 crc kubenswrapper[4829]: E0217 16:45:00.176033 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176041 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="extract-content" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176309 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafaefdf-5318-4146-bf8f-f2e8d5d83ec6" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.176359 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="939a62be-82dd-4a76-9dc2-8fbadadc3739" containerName="registry-server" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.177376 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.180673 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.183560 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.190313 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.315403 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 
16:45:00.315591 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.315621 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418284 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418518 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.418551 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: 
\"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.419560 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.424291 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.435089 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"collect-profiles-29522445-h7tqt\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.500948 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:00 crc kubenswrapper[4829]: I0217 16:45:00.967318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 16:45:00 crc kubenswrapper[4829]: W0217 16:45:00.980251 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ddee5a9_0539_4387_8a52_5a41ca147e35.slice/crio-8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd WatchSource:0}: Error finding container 8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd: Status 404 returned error can't find the container with id 8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.190411 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerStarted","Data":"1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2"} Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.190450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerStarted","Data":"8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd"} Feb 17 16:45:01 crc kubenswrapper[4829]: I0217 16:45:01.216525 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" podStartSLOduration=1.216507446 podStartE2EDuration="1.216507446s" podCreationTimestamp="2026-02-17 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
16:45:01.205958995 +0000 UTC m=+3013.622976973" watchObservedRunningTime="2026-02-17 16:45:01.216507446 +0000 UTC m=+3013.633525424" Feb 17 16:45:02 crc kubenswrapper[4829]: I0217 16:45:02.206265 4829 generic.go:334] "Generic (PLEG): container finished" podID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerID="1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2" exitCode=0 Feb 17 16:45:02 crc kubenswrapper[4829]: I0217 16:45:02.206361 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerDied","Data":"1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2"} Feb 17 16:45:03 crc kubenswrapper[4829]: E0217 16:45:03.281551 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.659806 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819615 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819707 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.819802 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") pod \"8ddee5a9-0539-4387-8a52-5a41ca147e35\" (UID: \"8ddee5a9-0539-4387-8a52-5a41ca147e35\") " Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.820447 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.820961 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ddee5a9-0539-4387-8a52-5a41ca147e35-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.825777 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574" (OuterVolumeSpecName: "kube-api-access-dn574") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "kube-api-access-dn574". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.830760 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ddee5a9-0539-4387-8a52-5a41ca147e35" (UID: "8ddee5a9-0539-4387-8a52-5a41ca147e35"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.922764 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ddee5a9-0539-4387-8a52-5a41ca147e35-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4829]: I0217 16:45:03.922793 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn574\" (UniqueName: \"kubernetes.io/projected/8ddee5a9-0539-4387-8a52-5a41ca147e35-kube-api-access-dn574\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229801 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" event={"ID":"8ddee5a9-0539-4387-8a52-5a41ca147e35","Type":"ContainerDied","Data":"8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd"} Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229876 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc35d7b9383ec49f3d4a201088c265c637c62fdd6508368782ab2872e7d43dd" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.229966 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt" Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.305966 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:45:04 crc kubenswrapper[4829]: I0217 16:45:04.318147 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-sbp9p"] Feb 17 16:45:06 crc kubenswrapper[4829]: E0217 16:45:06.281058 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:06 crc kubenswrapper[4829]: I0217 16:45:06.293993 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5695ec4a-a69a-4e62-9ddd-c9cea43413a9" path="/var/lib/kubelet/pods/5695ec4a-a69a-4e62-9ddd-c9cea43413a9/volumes" Feb 17 16:45:07 crc kubenswrapper[4829]: I0217 16:45:07.280032 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:07 crc kubenswrapper[4829]: E0217 16:45:07.280692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:17 crc kubenswrapper[4829]: E0217 16:45:17.284093 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:18 crc kubenswrapper[4829]: E0217 16:45:18.293185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:18 crc kubenswrapper[4829]: I0217 16:45:18.898683 4829 scope.go:117] "RemoveContainer" containerID="389d0351ed8637b14697e9cc82978b1a3b1ec333a82559ba657a0e790d1a453d" Feb 17 16:45:19 crc kubenswrapper[4829]: I0217 16:45:19.279622 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:19 crc kubenswrapper[4829]: E0217 16:45:19.280026 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:31 crc kubenswrapper[4829]: E0217 16:45:31.281762 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:31 crc kubenswrapper[4829]: E0217 16:45:31.281840 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:34 crc kubenswrapper[4829]: I0217 16:45:34.279432 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:34 crc kubenswrapper[4829]: E0217 16:45:34.280059 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:42 crc kubenswrapper[4829]: E0217 16:45:42.283695 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:45 crc kubenswrapper[4829]: I0217 16:45:45.279912 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:45 crc kubenswrapper[4829]: E0217 16:45:45.280499 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:45:46 crc kubenswrapper[4829]: E0217 16:45:46.281292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:45:53 crc kubenswrapper[4829]: E0217 16:45:53.282398 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:45:57 crc kubenswrapper[4829]: I0217 16:45:57.280533 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:45:57 crc kubenswrapper[4829]: E0217 16:45:57.281187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:01 crc kubenswrapper[4829]: E0217 16:46:01.284276 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:07 crc kubenswrapper[4829]: E0217 16:46:07.281495 4829 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:09 crc kubenswrapper[4829]: I0217 16:46:09.279602 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:09 crc kubenswrapper[4829]: E0217 16:46:09.280455 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:15 crc kubenswrapper[4829]: E0217 16:46:15.282754 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:20 crc kubenswrapper[4829]: I0217 16:46:20.280138 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:20 crc kubenswrapper[4829]: E0217 16:46:20.281071 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:22 crc kubenswrapper[4829]: E0217 16:46:22.281330 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.543706 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:22 crc kubenswrapper[4829]: E0217 16:46:22.544342 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.544369 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.544715 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" containerName="collect-profiles" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.549450 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.558879 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678167 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678605 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.678860 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.780830 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.780956 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781053 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781604 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.781683 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.802275 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"certified-operators-psxcg\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:22 crc kubenswrapper[4829]: I0217 16:46:22.877050 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:23 crc kubenswrapper[4829]: I0217 16:46:23.470626 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087526 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" exitCode=0 Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087649 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b"} Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.087914 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"b837a4f7d0720eda0be84215e50b60a7a3dc027a4e3757bb03a0162d743b5e59"} Feb 17 16:46:24 crc kubenswrapper[4829]: I0217 16:46:24.090780 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:46:25 crc kubenswrapper[4829]: I0217 16:46:25.100630 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} Feb 17 16:46:27 crc kubenswrapper[4829]: I0217 16:46:27.122918 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" exitCode=0 Feb 17 16:46:27 crc kubenswrapper[4829]: I0217 16:46:27.122996 4829 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} Feb 17 16:46:28 crc kubenswrapper[4829]: I0217 16:46:28.145982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerStarted","Data":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} Feb 17 16:46:28 crc kubenswrapper[4829]: I0217 16:46:28.168263 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-psxcg" podStartSLOduration=2.694145655 podStartE2EDuration="6.168246708s" podCreationTimestamp="2026-02-17 16:46:22 +0000 UTC" firstStartedPulling="2026-02-17 16:46:24.090311455 +0000 UTC m=+3096.507329463" lastFinishedPulling="2026-02-17 16:46:27.564412538 +0000 UTC m=+3099.981430516" observedRunningTime="2026-02-17 16:46:28.164898469 +0000 UTC m=+3100.581916447" watchObservedRunningTime="2026-02-17 16:46:28.168246708 +0000 UTC m=+3100.585264686" Feb 17 16:46:29 crc kubenswrapper[4829]: E0217 16:46:29.281687 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:31 crc kubenswrapper[4829]: I0217 16:46:31.279585 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:31 crc kubenswrapper[4829]: E0217 16:46:31.281272 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.877341 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.877729 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:32 crc kubenswrapper[4829]: I0217 16:46:32.931854 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:33 crc kubenswrapper[4829]: I0217 16:46:33.258409 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:33 crc kubenswrapper[4829]: I0217 16:46:33.311654 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.230483 4829 generic.go:334] "Generic (PLEG): container finished" podID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerID="567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71" exitCode=2 Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.230916 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-psxcg" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" containerID="cri-o://b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" gracePeriod=2 Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.231214 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerDied","Data":"567a7edf286bfbbdd02739d68013ec3613f47cb7969832841de557867cef3b71"} Feb 17 16:46:35 crc kubenswrapper[4829]: E0217 16:46:35.280874 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.852858 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.908882 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.909191 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.909340 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") pod \"39b694ae-4f43-4017-a530-197ed7e3a433\" (UID: \"39b694ae-4f43-4017-a530-197ed7e3a433\") " Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.910554 4829 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities" (OuterVolumeSpecName: "utilities") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.915129 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp" (OuterVolumeSpecName: "kube-api-access-q85tp") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "kube-api-access-q85tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:46:35 crc kubenswrapper[4829]: I0217 16:46:35.965213 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39b694ae-4f43-4017-a530-197ed7e3a433" (UID: "39b694ae-4f43-4017-a530-197ed7e3a433"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013463 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013523 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b694ae-4f43-4017-a530-197ed7e3a433-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.013545 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85tp\" (UniqueName: \"kubernetes.io/projected/39b694ae-4f43-4017-a530-197ed7e3a433-kube-api-access-q85tp\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248550 4829 generic.go:334] "Generic (PLEG): container finished" podID="39b694ae-4f43-4017-a530-197ed7e3a433" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" exitCode=0 Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248693 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-psxcg" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248750 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248822 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-psxcg" event={"ID":"39b694ae-4f43-4017-a530-197ed7e3a433","Type":"ContainerDied","Data":"b837a4f7d0720eda0be84215e50b60a7a3dc027a4e3757bb03a0162d743b5e59"} Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.248847 4829 scope.go:117] "RemoveContainer" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.288619 4829 scope.go:117] "RemoveContainer" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.331628 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.332781 4829 scope.go:117] "RemoveContainer" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.346073 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-psxcg"] Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.399146 4829 scope.go:117] "RemoveContainer" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.402168 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": container with ID starting with b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb not found: ID does not exist" containerID="b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402201 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb"} err="failed to get container status \"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": rpc error: code = NotFound desc = could not find container \"b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb\": container with ID starting with b87af106684613ae3be4dd524350b9668b37623164043d0cd5c4e793b1b49dbb not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402226 4829 scope.go:117] "RemoveContainer" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.402658 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": container with ID starting with 093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f not found: ID does not exist" containerID="093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402704 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f"} err="failed to get container status \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": rpc error: code = NotFound desc = could not find container \"093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f\": container with ID 
starting with 093eee92457c5439741cf815673110164fcca402802f1a8b259bcca2e05aeb7f not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.402732 4829 scope.go:117] "RemoveContainer" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: E0217 16:46:36.403026 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": container with ID starting with 8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b not found: ID does not exist" containerID="8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.403074 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b"} err="failed to get container status \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": rpc error: code = NotFound desc = could not find container \"8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b\": container with ID starting with 8a704530e2b7cb91e5af2c14b3676509dbd3097ea34ac497e93d2be6f3ac894b not found: ID does not exist" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.789883 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945250 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945321 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.945379 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") pod \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\" (UID: \"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497\") " Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.951774 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb" (OuterVolumeSpecName: "kube-api-access-24dqb") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). InnerVolumeSpecName "kube-api-access-24dqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:46:36 crc kubenswrapper[4829]: I0217 16:46:36.987623 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory" (OuterVolumeSpecName: "inventory") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.015706 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" (UID: "c0fd9f61-596b-4ef3-b6da-6ebe6b04d497"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049272 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24dqb\" (UniqueName: \"kubernetes.io/projected/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-kube-api-access-24dqb\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049329 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.049343 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0fd9f61-596b-4ef3-b6da-6ebe6b04d497-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.262997 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" event={"ID":"c0fd9f61-596b-4ef3-b6da-6ebe6b04d497","Type":"ContainerDied","Data":"a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3"} Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 16:46:37.263086 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7379c80318f58ad530251e40790bd3bf10117ea8625d9767b248d2cd569f2b3" Feb 17 16:46:37 crc kubenswrapper[4829]: I0217 
16:46:37.263029 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt" Feb 17 16:46:38 crc kubenswrapper[4829]: I0217 16:46:38.308152 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" path="/var/lib/kubelet/pods/39b694ae-4f43-4017-a530-197ed7e3a433/volumes" Feb 17 16:46:41 crc kubenswrapper[4829]: E0217 16:46:41.282374 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:46:46 crc kubenswrapper[4829]: I0217 16:46:46.279928 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:46:46 crc kubenswrapper[4829]: E0217 16:46:46.280792 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:46:49 crc kubenswrapper[4829]: E0217 16:46:49.285072 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:46:53 crc kubenswrapper[4829]: E0217 16:46:53.282355 4829 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:01 crc kubenswrapper[4829]: I0217 16:47:01.281238 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:47:01 crc kubenswrapper[4829]: E0217 16:47:01.282373 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:47:02 crc kubenswrapper[4829]: E0217 16:47:02.283345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:47:08 crc kubenswrapper[4829]: E0217 16:47:08.293944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.048423 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"] Feb 17 16:47:14 crc 
kubenswrapper[4829]: E0217 16:47:14.049230 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-utilities" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049245 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-utilities" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049281 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049289 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049327 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049334 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.049352 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-content" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049360 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="extract-content" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049666 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fd9f61-596b-4ef3-b6da-6ebe6b04d497" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.049688 4829 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="39b694ae-4f43-4017-a530-197ed7e3a433" containerName="registry-server" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.050676 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.056466 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.056797 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.058871 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.059007 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.065720 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"] Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161262 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161344 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.161404 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264271 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264553 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.264735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 
17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.270652 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"
Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.270834 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"
Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.281728 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.282142 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.283262 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"
Feb 17 16:47:14 crc kubenswrapper[4829]: E0217 16:47:14.283463 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.371368 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"
Feb 17 16:47:14 crc kubenswrapper[4829]: I0217 16:47:14.986279 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5"]
Feb 17 16:47:15 crc kubenswrapper[4829]: I0217 16:47:15.683633 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerStarted","Data":"98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083"}
Feb 17 16:47:16 crc kubenswrapper[4829]: I0217 16:47:16.700102 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerStarted","Data":"2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946"}
Feb 17 16:47:16 crc kubenswrapper[4829]: I0217 16:47:16.716392 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" podStartSLOduration=2.277938567 podStartE2EDuration="2.716376138s" podCreationTimestamp="2026-02-17 16:47:14 +0000 UTC" firstStartedPulling="2026-02-17 16:47:14.987008571 +0000 UTC m=+3147.404026569" lastFinishedPulling="2026-02-17 16:47:15.425446122 +0000 UTC m=+3147.842464140" observedRunningTime="2026-02-17 16:47:16.713907132 +0000 UTC m=+3149.130925110" watchObservedRunningTime="2026-02-17 16:47:16.716376138 +0000 UTC m=+3149.133394116"
Feb 17 16:47:20 crc kubenswrapper[4829]: E0217 16:47:20.283946 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:47:26 crc kubenswrapper[4829]: I0217 16:47:26.280973 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:47:26 crc kubenswrapper[4829]: E0217 16:47:26.281949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:47:27 crc kubenswrapper[4829]: E0217 16:47:27.282757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:47:32 crc kubenswrapper[4829]: E0217 16:47:32.281944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:47:38 crc kubenswrapper[4829]: I0217 16:47:38.289156 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:47:38 crc kubenswrapper[4829]: E0217 16:47:38.290036 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:47:39 crc kubenswrapper[4829]: E0217 16:47:39.281609 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:47:46 crc kubenswrapper[4829]: E0217 16:47:46.281766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:47:52 crc kubenswrapper[4829]: I0217 16:47:52.279992 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:47:52 crc kubenswrapper[4829]: E0217 16:47:52.280699 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:47:53 crc kubenswrapper[4829]: E0217 16:47:53.281118 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:47:59 crc kubenswrapper[4829]: E0217 16:47:59.281522 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:48:04 crc kubenswrapper[4829]: I0217 16:48:04.279891 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:48:04 crc kubenswrapper[4829]: E0217 16:48:04.281606 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:48:07 crc kubenswrapper[4829]: E0217 16:48:07.283067 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:48:13 crc kubenswrapper[4829]: E0217 16:48:13.282840 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:48:17 crc kubenswrapper[4829]: I0217 16:48:17.279772 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:48:17 crc kubenswrapper[4829]: E0217 16:48:17.280667 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94"
Feb 17 16:48:19 crc kubenswrapper[4829]: E0217 16:48:19.283137 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421097 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421566 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.421696 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:48:26 crc kubenswrapper[4829]: E0217 16:48:26.422934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:48:31 crc kubenswrapper[4829]: I0217 16:48:31.279736 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322"
Feb 17 16:48:31 crc kubenswrapper[4829]: E0217 16:48:31.283125 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:48:32 crc kubenswrapper[4829]: I0217 16:48:32.500365 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"}
Feb 17 16:48:38 crc kubenswrapper[4829]: E0217 16:48:38.288782 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.414825 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.415333 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.415488 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:48:44 crc kubenswrapper[4829]: E0217 16:48:44.416658 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:48:53 crc kubenswrapper[4829]: E0217 16:48:53.286557 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:48:58 crc kubenswrapper[4829]: E0217 16:48:58.289430 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:49:07 crc kubenswrapper[4829]: E0217 16:49:07.282348 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:49:10 crc kubenswrapper[4829]: E0217 16:49:10.281828 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.504679 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.509079 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.515010 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652726 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652776 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.652862 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756201 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756757 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.756968 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.757205 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.757687 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:14 crc kubenswrapper[4829]: I0217 16:49:14.967976 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"redhat-marketplace-mg6dh\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") " pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:15 crc kubenswrapper[4829]: I0217 16:49:15.161734 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:15 crc kubenswrapper[4829]: I0217 16:49:15.692358 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006868 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" exitCode=0
Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006909 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899"}
Feb 17 16:49:16 crc kubenswrapper[4829]: I0217 16:49:16.006936 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"b0f4e9aeceebcf8cb08d563b4cc1f0bd60551e4b6fabf6f07540dcc2ec4d3d42"}
Feb 17 16:49:17 crc kubenswrapper[4829]: I0217 16:49:17.021711 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"}
Feb 17 16:49:19 crc kubenswrapper[4829]: I0217 16:49:19.047036 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7" exitCode=0
Feb 17 16:49:19 crc kubenswrapper[4829]: I0217 16:49:19.047474 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"}
Feb 17 16:49:20 crc kubenswrapper[4829]: I0217 16:49:20.063567 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerStarted","Data":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"}
Feb 17 16:49:20 crc kubenswrapper[4829]: I0217 16:49:20.096401 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mg6dh" podStartSLOduration=2.660681897 podStartE2EDuration="6.0963716s" podCreationTimestamp="2026-02-17 16:49:14 +0000 UTC" firstStartedPulling="2026-02-17 16:49:16.010697622 +0000 UTC m=+3268.427715600" lastFinishedPulling="2026-02-17 16:49:19.446387325 +0000 UTC m=+3271.863405303" observedRunningTime="2026-02-17 16:49:20.082156577 +0000 UTC m=+3272.499174575" watchObservedRunningTime="2026-02-17 16:49:20.0963716 +0000 UTC m=+3272.513389598"
Feb 17 16:49:22 crc kubenswrapper[4829]: E0217 16:49:22.281645 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 16:49:24 crc kubenswrapper[4829]: E0217 16:49:24.281981 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.162411 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.162515 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:25 crc kubenswrapper[4829]: I0217 16:49:25.221955 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:26 crc kubenswrapper[4829]: I0217 16:49:26.191860 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:26 crc kubenswrapper[4829]: I0217 16:49:26.241294 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.154489 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mg6dh" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" containerID="cri-o://44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" gracePeriod=2
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.721110 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.801713 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") "
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.801966 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") "
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.802020 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") pod \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\" (UID: \"3f0f0f09-269b-4977-9cf6-5c5cb72ec856\") "
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.803142 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities" (OuterVolumeSpecName: "utilities") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.808973 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx" (OuterVolumeSpecName: "kube-api-access-tnfbx") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "kube-api-access-tnfbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.842985 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0f0f09-269b-4977-9cf6-5c5cb72ec856" (UID: "3f0f0f09-269b-4977-9cf6-5c5cb72ec856"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905844 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905908 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:28 crc kubenswrapper[4829]: I0217 16:49:28.905927 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfbx\" (UniqueName: \"kubernetes.io/projected/3f0f0f09-269b-4977-9cf6-5c5cb72ec856-kube-api-access-tnfbx\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168597 4829 generic.go:334] "Generic (PLEG): container finished" podID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24" exitCode=0
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168671 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mg6dh"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.168691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"}
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.169126 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mg6dh" event={"ID":"3f0f0f09-269b-4977-9cf6-5c5cb72ec856","Type":"ContainerDied","Data":"b0f4e9aeceebcf8cb08d563b4cc1f0bd60551e4b6fabf6f07540dcc2ec4d3d42"}
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.169144 4829 scope.go:117] "RemoveContainer" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.200813 4829 scope.go:117] "RemoveContainer" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.205245 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.225697 4829 scope.go:117] "RemoveContainer" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.227070 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mg6dh"]
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.293844 4829 scope.go:117] "RemoveContainer" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"
Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.295892 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": container with ID starting with 44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24 not found: ID does not exist" containerID="44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.295934 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24"} err="failed to get container status \"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": rpc error: code = NotFound desc = could not find container \"44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24\": container with ID starting with 44b4467ed1ed982bdcb28b8a8585f3ff3ac7e12dae2599b89ed82695e88b3c24 not found: ID does not exist"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.295970 4829 scope.go:117] "RemoveContainer" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"
Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.296740 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": container with ID starting with bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7 not found: ID does not exist" containerID="bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"
Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.296786 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7"} err="failed to get container status \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": rpc error: code = NotFound desc = could not find container \"bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7\": container with ID
starting with bcb849167a9d5eb1fa18b0675315d8324291522be494c63353c56e6503e98ea7 not found: ID does not exist" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.296815 4829 scope.go:117] "RemoveContainer" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" Feb 17 16:49:29 crc kubenswrapper[4829]: E0217 16:49:29.297181 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": container with ID starting with dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899 not found: ID does not exist" containerID="dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899" Feb 17 16:49:29 crc kubenswrapper[4829]: I0217 16:49:29.297227 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899"} err="failed to get container status \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": rpc error: code = NotFound desc = could not find container \"dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899\": container with ID starting with dddcc0a0df72abb1e195709fdbf975c99655f725ba1c32d9ca51c69cce0c6899 not found: ID does not exist" Feb 17 16:49:30 crc kubenswrapper[4829]: I0217 16:49:30.303864 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" path="/var/lib/kubelet/pods/3f0f0f09-269b-4977-9cf6-5c5cb72ec856/volumes" Feb 17 16:49:35 crc kubenswrapper[4829]: E0217 16:49:35.281848 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:49:35 crc kubenswrapper[4829]: E0217 16:49:35.281863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:48 crc kubenswrapper[4829]: E0217 16:49:48.292035 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:49:50 crc kubenswrapper[4829]: E0217 16:49:50.282682 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:00 crc kubenswrapper[4829]: E0217 16:50:00.281726 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:05 crc kubenswrapper[4829]: E0217 16:50:05.282685 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:15 crc kubenswrapper[4829]: E0217 16:50:15.291862 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:18 crc kubenswrapper[4829]: E0217 16:50:18.291012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:27 crc kubenswrapper[4829]: E0217 16:50:27.281702 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:29 crc kubenswrapper[4829]: E0217 16:50:29.284746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:39 crc kubenswrapper[4829]: E0217 16:50:39.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:50:42 crc kubenswrapper[4829]: E0217 16:50:42.282323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:52 crc kubenswrapper[4829]: I0217 16:50:52.424550 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:52 crc kubenswrapper[4829]: I0217 16:50:52.425059 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:50:54 crc kubenswrapper[4829]: E0217 16:50:54.283883 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:50:54 crc kubenswrapper[4829]: E0217 16:50:54.283910 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" 
podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:05 crc kubenswrapper[4829]: E0217 16:51:05.282014 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:08 crc kubenswrapper[4829]: E0217 16:51:08.295324 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:17 crc kubenswrapper[4829]: E0217 16:51:17.281934 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:20 crc kubenswrapper[4829]: E0217 16:51:20.282297 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:22 crc kubenswrapper[4829]: I0217 16:51:22.424954 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 16:51:22 crc kubenswrapper[4829]: I0217 16:51:22.428823 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:31 crc kubenswrapper[4829]: E0217 16:51:31.281568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:32 crc kubenswrapper[4829]: E0217 16:51:32.281526 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:45 crc kubenswrapper[4829]: E0217 16:51:45.281757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:46 crc kubenswrapper[4829]: E0217 16:51:46.281414 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.424424 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.425122 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.425182 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.426300 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.426383 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" gracePeriod=600 Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.766522 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" 
containerID="a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" exitCode=0 Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.767045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1"} Feb 17 16:51:52 crc kubenswrapper[4829]: I0217 16:51:52.767076 4829 scope.go:117] "RemoveContainer" containerID="41bd7e81a84b328a91c7aafa29615afdfb877fe593d5c26c2df39dac873b6322" Feb 17 16:51:53 crc kubenswrapper[4829]: I0217 16:51:53.778173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} Feb 17 16:51:56 crc kubenswrapper[4829]: E0217 16:51:56.282298 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.759218 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760537 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-utilities" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760557 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-utilities" Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760603 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-content" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760613 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="extract-content" Feb 17 16:51:59 crc kubenswrapper[4829]: E0217 16:51:59.760631 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.760962 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0f0f09-269b-4977-9cf6-5c5cb72ec856" containerName="registry-server" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.763207 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.769321 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.930671 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.931010 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:51:59 crc kubenswrapper[4829]: I0217 16:51:59.931080 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.032970 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033251 4829 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033703 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.033791 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.055832 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"community-operators-mlm9r\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.088113 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.697387 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:00 crc kubenswrapper[4829]: W0217 16:52:00.719857 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60601378_20f1_4f29_a22b_0b6dfbc118a1.slice/crio-8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd WatchSource:0}: Error finding container 8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd: Status 404 returned error can't find the container with id 8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd Feb 17 16:52:00 crc kubenswrapper[4829]: I0217 16:52:00.853405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd"} Feb 17 16:52:01 crc kubenswrapper[4829]: E0217 16:52:01.280792 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.869949 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" exitCode=0 Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.870045 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" 
event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e"} Feb 17 16:52:01 crc kubenswrapper[4829]: I0217 16:52:01.872963 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:52:02 crc kubenswrapper[4829]: I0217 16:52:02.884435 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} Feb 17 16:52:05 crc kubenswrapper[4829]: I0217 16:52:05.917485 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" exitCode=0 Feb 17 16:52:05 crc kubenswrapper[4829]: I0217 16:52:05.917559 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} Feb 17 16:52:07 crc kubenswrapper[4829]: I0217 16:52:07.939732 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerStarted","Data":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} Feb 17 16:52:07 crc kubenswrapper[4829]: I0217 16:52:07.968846 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mlm9r" podStartSLOduration=3.519773888 podStartE2EDuration="8.968829693s" podCreationTimestamp="2026-02-17 16:51:59 +0000 UTC" firstStartedPulling="2026-02-17 16:52:01.872694941 +0000 UTC m=+3434.289712919" lastFinishedPulling="2026-02-17 16:52:07.321750736 +0000 UTC 
m=+3439.738768724" observedRunningTime="2026-02-17 16:52:07.962186313 +0000 UTC m=+3440.379204301" watchObservedRunningTime="2026-02-17 16:52:07.968829693 +0000 UTC m=+3440.385847671" Feb 17 16:52:09 crc kubenswrapper[4829]: E0217 16:52:09.281379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:10 crc kubenswrapper[4829]: I0217 16:52:10.088381 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:10 crc kubenswrapper[4829]: I0217 16:52:10.088670 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:11 crc kubenswrapper[4829]: I0217 16:52:11.132679 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mlm9r" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" probeResult="failure" output=< Feb 17 16:52:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:52:11 crc kubenswrapper[4829]: > Feb 17 16:52:16 crc kubenswrapper[4829]: E0217 16:52:16.284141 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:20 crc kubenswrapper[4829]: I0217 16:52:20.143879 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:20 crc 
kubenswrapper[4829]: I0217 16:52:20.206400 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:20 crc kubenswrapper[4829]: I0217 16:52:20.384614 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:21 crc kubenswrapper[4829]: E0217 16:52:21.281688 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.094129 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mlm9r" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" containerID="cri-o://ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" gracePeriod=2 Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.609982 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741057 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741106 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.741242 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") pod \"60601378-20f1-4f29-a22b-0b6dfbc118a1\" (UID: \"60601378-20f1-4f29-a22b-0b6dfbc118a1\") " Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.743005 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities" (OuterVolumeSpecName: "utilities") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.763289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5" (OuterVolumeSpecName: "kube-api-access-6ssv5") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "kube-api-access-6ssv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.797057 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60601378-20f1-4f29-a22b-0b6dfbc118a1" (UID: "60601378-20f1-4f29-a22b-0b6dfbc118a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844919 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ssv5\" (UniqueName: \"kubernetes.io/projected/60601378-20f1-4f29-a22b-0b6dfbc118a1-kube-api-access-6ssv5\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844976 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:22 crc kubenswrapper[4829]: I0217 16:52:22.844990 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60601378-20f1-4f29-a22b-0b6dfbc118a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107063 4829 generic.go:334] "Generic (PLEG): container finished" podID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" exitCode=0 Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107438 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107491 4829 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mlm9r" event={"ID":"60601378-20f1-4f29-a22b-0b6dfbc118a1","Type":"ContainerDied","Data":"8ef5f12ec3fa3bd03cf727fbd6b85e2366731072b00ce9a0cf0d6b300caa60dd"} Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107511 4829 scope.go:117] "RemoveContainer" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.107745 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mlm9r" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.130430 4829 scope.go:117] "RemoveContainer" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.161668 4829 scope.go:117] "RemoveContainer" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.168088 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.185148 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mlm9r"] Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.221765 4829 scope.go:117] "RemoveContainer" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: E0217 16:52:23.222294 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": container with ID starting with ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef not found: ID does not exist" containerID="ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 
16:52:23.222348 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef"} err="failed to get container status \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": rpc error: code = NotFound desc = could not find container \"ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef\": container with ID starting with ab2731689bc644ef2ee99655019c7f6c02bbd53bbf40fe53159900e2c64b0aef not found: ID does not exist" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222384 4829 scope.go:117] "RemoveContainer" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: E0217 16:52:23.222820 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": container with ID starting with e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949 not found: ID does not exist" containerID="e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222854 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949"} err="failed to get container status \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": rpc error: code = NotFound desc = could not find container \"e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949\": container with ID starting with e62019b2b9d7a742db0f464fee2353e390da3a940634ffae5e6b5e4cf6f06949 not found: ID does not exist" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.222875 4829 scope.go:117] "RemoveContainer" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc 
kubenswrapper[4829]: E0217 16:52:23.223135 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": container with ID starting with 9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e not found: ID does not exist" containerID="9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e" Feb 17 16:52:23 crc kubenswrapper[4829]: I0217 16:52:23.223166 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e"} err="failed to get container status \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": rpc error: code = NotFound desc = could not find container \"9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e\": container with ID starting with 9cd4cb3f9dae778659bb2bc68b1e69d99940d0a5d6b1b2eddb1a6b4ec5a2837e not found: ID does not exist" Feb 17 16:52:24 crc kubenswrapper[4829]: I0217 16:52:24.292809 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" path="/var/lib/kubelet/pods/60601378-20f1-4f29-a22b-0b6dfbc118a1/volumes" Feb 17 16:52:28 crc kubenswrapper[4829]: E0217 16:52:28.289766 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:33 crc kubenswrapper[4829]: E0217 16:52:33.283793 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:41 crc kubenswrapper[4829]: E0217 16:52:41.282516 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:47 crc kubenswrapper[4829]: E0217 16:52:47.282445 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.475365 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 16:52:50.476436 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-utilities" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476449 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-utilities" Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 16:52:50.476471 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476478 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: E0217 
16:52:50.476501 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-content" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476509 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="extract-content" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.476780 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="60601378-20f1-4f29-a22b-0b6dfbc118a1" containerName="registry-server" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.478710 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.492226 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639005 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639315 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.639465 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.742955 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743491 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743626 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.743499 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.744079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.765263 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"redhat-operators-r9mgp\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:50 crc kubenswrapper[4829]: I0217 16:52:50.798924 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:52:51 crc kubenswrapper[4829]: I0217 16:52:51.381196 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:52:51 crc kubenswrapper[4829]: I0217 16:52:51.402493 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"c60a853849686ab53590661fb47e340fcb448a03febd0f524b02caaf02879b53"} Feb 17 16:52:52 crc kubenswrapper[4829]: I0217 16:52:52.422385 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" exitCode=0 Feb 17 16:52:52 crc kubenswrapper[4829]: I0217 16:52:52.422569 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9"} Feb 17 16:52:53 crc kubenswrapper[4829]: I0217 16:52:53.434553 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} Feb 17 16:52:56 crc kubenswrapper[4829]: E0217 16:52:56.296422 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:52:58 crc kubenswrapper[4829]: I0217 16:52:58.484423 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" exitCode=0 Feb 17 16:52:58 crc kubenswrapper[4829]: I0217 16:52:58.484480 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} Feb 17 16:52:59 crc kubenswrapper[4829]: I0217 16:52:59.496563 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerStarted","Data":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} Feb 17 16:52:59 crc kubenswrapper[4829]: I0217 16:52:59.516785 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9mgp" podStartSLOduration=3.001427853 podStartE2EDuration="9.516769054s" podCreationTimestamp="2026-02-17 16:52:50 +0000 UTC" firstStartedPulling="2026-02-17 16:52:52.428647578 +0000 UTC m=+3484.845665546" lastFinishedPulling="2026-02-17 16:52:58.943988769 +0000 UTC m=+3491.361006747" 
observedRunningTime="2026-02-17 16:52:59.514643086 +0000 UTC m=+3491.931661084" watchObservedRunningTime="2026-02-17 16:52:59.516769054 +0000 UTC m=+3491.933787032" Feb 17 16:53:00 crc kubenswrapper[4829]: E0217 16:53:00.281385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:00 crc kubenswrapper[4829]: I0217 16:53:00.800339 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:00 crc kubenswrapper[4829]: I0217 16:53:00.803078 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:01 crc kubenswrapper[4829]: I0217 16:53:01.850951 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" probeResult="failure" output=< Feb 17 16:53:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:53:01 crc kubenswrapper[4829]: > Feb 17 16:53:09 crc kubenswrapper[4829]: E0217 16:53:09.285897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:11 crc kubenswrapper[4829]: I0217 16:53:11.845792 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" 
containerName="registry-server" probeResult="failure" output=< Feb 17 16:53:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 16:53:11 crc kubenswrapper[4829]: > Feb 17 16:53:12 crc kubenswrapper[4829]: E0217 16:53:12.282870 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:20 crc kubenswrapper[4829]: E0217 16:53:20.281774 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:20 crc kubenswrapper[4829]: I0217 16:53:20.852410 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:20 crc kubenswrapper[4829]: I0217 16:53:20.924327 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:21 crc kubenswrapper[4829]: I0217 16:53:21.671852 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:22 crc kubenswrapper[4829]: I0217 16:53:22.746383 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9mgp" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" containerID="cri-o://8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" gracePeriod=2 Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.255957 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308244 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308508 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.308672 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") pod \"999f5a65-e45a-4014-a208-9bfe09f453b3\" (UID: \"999f5a65-e45a-4014-a208-9bfe09f453b3\") " Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.315704 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities" (OuterVolumeSpecName: "utilities") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.329416 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd" (OuterVolumeSpecName: "kube-api-access-8zjqd") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). 
InnerVolumeSpecName "kube-api-access-8zjqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.411814 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.411847 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zjqd\" (UniqueName: \"kubernetes.io/projected/999f5a65-e45a-4014-a208-9bfe09f453b3-kube-api-access-8zjqd\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.454442 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "999f5a65-e45a-4014-a208-9bfe09f453b3" (UID: "999f5a65-e45a-4014-a208-9bfe09f453b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.513396 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/999f5a65-e45a-4014-a208-9bfe09f453b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760345 4829 generic.go:334] "Generic (PLEG): container finished" podID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" exitCode=0 Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760404 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760449 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9mgp" event={"ID":"999f5a65-e45a-4014-a208-9bfe09f453b3","Type":"ContainerDied","Data":"c60a853849686ab53590661fb47e340fcb448a03febd0f524b02caaf02879b53"} Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760478 4829 scope.go:117] "RemoveContainer" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.760517 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9mgp" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.784940 4829 scope.go:117] "RemoveContainer" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.825077 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.836100 4829 scope.go:117] "RemoveContainer" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.841836 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9mgp"] Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.880648 4829 scope.go:117] "RemoveContainer" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.881259 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": container with ID starting with 8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6 not found: ID does not exist" containerID="8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881292 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6"} err="failed to get container status \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": rpc error: code = NotFound desc = could not find container \"8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6\": container with ID starting with 8c6e029a5aa76b197e6a418d5f9e599dbc51be24809ee117352ee21380df96a6 not found: ID does 
not exist" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881316 4829 scope.go:117] "RemoveContainer" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.881714 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": container with ID starting with b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d not found: ID does not exist" containerID="b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881742 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d"} err="failed to get container status \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": rpc error: code = NotFound desc = could not find container \"b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d\": container with ID starting with b823ef641e61a152b730ce56ed2f9a5735b4633b73a9c1b7699f4075e41e307d not found: ID does not exist" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.881762 4829 scope.go:117] "RemoveContainer" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: E0217 16:53:23.882087 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": container with ID starting with 3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9 not found: ID does not exist" containerID="3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9" Feb 17 16:53:23 crc kubenswrapper[4829]: I0217 16:53:23.882113 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9"} err="failed to get container status \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": rpc error: code = NotFound desc = could not find container \"3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9\": container with ID starting with 3c9c0c7e2ea84ea7db3b8f8840e350f78bb41a91514b5124daed89fe9df316c9 not found: ID does not exist" Feb 17 16:53:24 crc kubenswrapper[4829]: E0217 16:53:24.281177 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.302317 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" path="/var/lib/kubelet/pods/999f5a65-e45a-4014-a208-9bfe09f453b3/volumes" Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.773992 4829 generic.go:334] "Generic (PLEG): container finished" podID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerID="2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946" exitCode=2 Feb 17 16:53:24 crc kubenswrapper[4829]: I0217 16:53:24.774104 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerDied","Data":"2bb42acc71e341fc9a4522365d43b12b36609f3846ab12d177cb109e9f8c1946"} Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.259106 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.403793 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.404725 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.404953 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") pod \"9a6550f4-cdf2-4365-8ce4-96642f12822f\" (UID: \"9a6550f4-cdf2-4365-8ce4-96642f12822f\") " Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.412801 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq" (OuterVolumeSpecName: "kube-api-access-kshsq") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "kube-api-access-kshsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.435912 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.436249 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory" (OuterVolumeSpecName: "inventory") pod "9a6550f4-cdf2-4365-8ce4-96642f12822f" (UID: "9a6550f4-cdf2-4365-8ce4-96642f12822f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509539 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509597 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6550f4-cdf2-4365-8ce4-96642f12822f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.509610 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kshsq\" (UniqueName: \"kubernetes.io/projected/9a6550f4-cdf2-4365-8ce4-96642f12822f-kube-api-access-kshsq\") on node \"crc\" DevicePath \"\"" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796498 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" 
event={"ID":"9a6550f4-cdf2-4365-8ce4-96642f12822f","Type":"ContainerDied","Data":"98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083"} Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796851 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98768e8c01313de918fca3faf0c5b385d4775bf61c51042946bdc072c4706083" Feb 17 16:53:26 crc kubenswrapper[4829]: I0217 16:53:26.796592 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.294132 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418360 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418452 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.418747 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:35 crc kubenswrapper[4829]: E0217 16:53:35.420059 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:46 crc kubenswrapper[4829]: E0217 16:53:46.284492 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.421320 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.421973 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.422175 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:48 crc kubenswrapper[4829]: E0217 16:53:48.423444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:53:52 crc kubenswrapper[4829]: I0217 16:53:52.424787 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:53:52 crc kubenswrapper[4829]: I0217 16:53:52.425410 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:53:57 crc kubenswrapper[4829]: E0217 16:53:57.285321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:03 crc kubenswrapper[4829]: E0217 16:54:03.281546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:10 crc kubenswrapper[4829]: E0217 16:54:10.283444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:15 crc kubenswrapper[4829]: E0217 16:54:15.281997 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:22 crc kubenswrapper[4829]: I0217 16:54:22.424878 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:22 crc kubenswrapper[4829]: I0217 16:54:22.425417 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:24 crc kubenswrapper[4829]: E0217 16:54:24.283261 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:27 crc kubenswrapper[4829]: E0217 16:54:27.282406 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:37 crc kubenswrapper[4829]: E0217 16:54:37.281165 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:38 crc kubenswrapper[4829]: E0217 16:54:38.289625 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.036772 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037852 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037872 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037916 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-content" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037923 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-content" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037936 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-utilities" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037943 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="extract-utilities" Feb 17 16:54:44 crc kubenswrapper[4829]: E0217 16:54:44.037974 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.037979 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.038181 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6550f4-cdf2-4365-8ce4-96642f12822f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.038198 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="999f5a65-e45a-4014-a208-9bfe09f453b3" containerName="registry-server" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.039074 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042071 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042135 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.042179 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.043393 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.055126 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.182781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.182921 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc 
kubenswrapper[4829]: I0217 16:54:44.183020 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.286534 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.287066 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.287330 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.295911 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.300139 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.310913 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-v8r24\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.363691 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 16:54:44 crc kubenswrapper[4829]: I0217 16:54:44.973566 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24"] Feb 17 16:54:45 crc kubenswrapper[4829]: I0217 16:54:45.700835 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerStarted","Data":"d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560"} Feb 17 16:54:46 crc kubenswrapper[4829]: I0217 16:54:46.712160 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerStarted","Data":"f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187"} Feb 17 16:54:49 crc kubenswrapper[4829]: E0217 16:54:49.288664 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.281459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.424738 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.425076 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.425125 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.426152 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.426243 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" gracePeriod=600 Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.583721 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772122 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" exitCode=0 Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60"} Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.772234 4829 scope.go:117] "RemoveContainer" containerID="a30df7202a42be74f3315f816fd110335994045832023cc2d9031eaaeeba09c1" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.773369 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:54:52 crc kubenswrapper[4829]: E0217 16:54:52.773903 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:54:52 crc kubenswrapper[4829]: I0217 16:54:52.799463 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" podStartSLOduration=8.197973639 podStartE2EDuration="8.799442291s" podCreationTimestamp="2026-02-17 16:54:44 +0000 UTC" firstStartedPulling="2026-02-17 16:54:44.987612334 +0000 UTC m=+3597.404630312" lastFinishedPulling="2026-02-17 
16:54:45.589080986 +0000 UTC m=+3598.006098964" observedRunningTime="2026-02-17 16:54:46.751679347 +0000 UTC m=+3599.168697335" watchObservedRunningTime="2026-02-17 16:54:52.799442291 +0000 UTC m=+3605.216460269" Feb 17 16:55:04 crc kubenswrapper[4829]: E0217 16:55:04.281520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:05 crc kubenswrapper[4829]: I0217 16:55:05.280318 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:05 crc kubenswrapper[4829]: E0217 16:55:05.280922 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:05 crc kubenswrapper[4829]: E0217 16:55:05.283381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:15 crc kubenswrapper[4829]: E0217 16:55:15.281612 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:17 crc kubenswrapper[4829]: E0217 16:55:17.283308 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:18 crc kubenswrapper[4829]: I0217 16:55:18.291483 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:18 crc kubenswrapper[4829]: E0217 16:55:18.292000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:26 crc kubenswrapper[4829]: E0217 16:55:26.283293 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:32 crc kubenswrapper[4829]: E0217 16:55:32.281859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:33 crc kubenswrapper[4829]: I0217 16:55:33.280223 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:33 crc kubenswrapper[4829]: E0217 16:55:33.280866 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:39 crc kubenswrapper[4829]: E0217 16:55:39.281850 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:45 crc kubenswrapper[4829]: I0217 16:55:45.280063 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:45 crc kubenswrapper[4829]: E0217 16:55:45.281222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:45 crc kubenswrapper[4829]: E0217 16:55:45.281613 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:55:54 crc kubenswrapper[4829]: E0217 16:55:54.281526 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:55:57 crc kubenswrapper[4829]: I0217 16:55:57.280068 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:55:57 crc kubenswrapper[4829]: E0217 16:55:57.280987 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:55:57 crc kubenswrapper[4829]: E0217 16:55:57.283759 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.289427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.289520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:08 crc kubenswrapper[4829]: I0217 16:56:08.288667 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:08 crc kubenswrapper[4829]: E0217 16:56:08.291710 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:20 crc kubenswrapper[4829]: E0217 16:56:20.281524 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:21 crc kubenswrapper[4829]: E0217 16:56:21.281884 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:23 crc kubenswrapper[4829]: I0217 16:56:23.279656 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:23 crc kubenswrapper[4829]: E0217 16:56:23.280285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:34 crc kubenswrapper[4829]: I0217 16:56:34.280293 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:34 crc kubenswrapper[4829]: E0217 16:56:34.281170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:35 crc kubenswrapper[4829]: E0217 16:56:35.282328 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:36 crc kubenswrapper[4829]: E0217 16:56:36.281909 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:47 crc kubenswrapper[4829]: I0217 16:56:47.280119 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:47 crc kubenswrapper[4829]: E0217 16:56:47.281141 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:56:47 crc kubenswrapper[4829]: E0217 16:56:47.283072 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:56:51 crc kubenswrapper[4829]: E0217 16:56:51.282681 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:56:58 crc kubenswrapper[4829]: I0217 16:56:58.282769 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:56:58 crc kubenswrapper[4829]: E0217 16:56:58.284175 4829 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:00 crc kubenswrapper[4829]: E0217 16:57:00.281719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:03 crc kubenswrapper[4829]: E0217 16:57:03.281918 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:11 crc kubenswrapper[4829]: E0217 16:57:11.282381 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:13 crc kubenswrapper[4829]: I0217 16:57:13.279680 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:13 crc kubenswrapper[4829]: E0217 16:57:13.280263 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:15 crc kubenswrapper[4829]: E0217 16:57:15.283978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:23 crc kubenswrapper[4829]: E0217 16:57:23.281432 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:27 crc kubenswrapper[4829]: I0217 16:57:27.280524 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:27 crc kubenswrapper[4829]: E0217 16:57:27.281424 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:29 crc kubenswrapper[4829]: E0217 16:57:29.294416 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:38 crc kubenswrapper[4829]: E0217 16:57:38.291040 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:40 crc kubenswrapper[4829]: I0217 16:57:40.280309 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:40 crc kubenswrapper[4829]: E0217 16:57:40.280897 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:57:40 crc kubenswrapper[4829]: E0217 16:57:40.283130 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:50 crc kubenswrapper[4829]: E0217 16:57:50.285374 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:57:54 crc kubenswrapper[4829]: E0217 16:57:54.281859 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:57:55 crc kubenswrapper[4829]: I0217 16:57:55.280199 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:57:55 crc kubenswrapper[4829]: E0217 16:57:55.280608 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:05 crc kubenswrapper[4829]: E0217 16:58:05.282450 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:07 crc kubenswrapper[4829]: I0217 16:58:07.279874 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:07 crc kubenswrapper[4829]: E0217 16:58:07.280763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:09 crc kubenswrapper[4829]: E0217 16:58:09.287019 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:19 crc kubenswrapper[4829]: E0217 16:58:19.282154 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:22 crc kubenswrapper[4829]: I0217 16:58:22.280441 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:22 crc kubenswrapper[4829]: E0217 16:58:22.281295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:23 crc kubenswrapper[4829]: E0217 16:58:23.281506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:34 crc kubenswrapper[4829]: I0217 16:58:34.281146 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.282079 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.283038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:34 crc kubenswrapper[4829]: E0217 16:58:34.283042 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:45 crc kubenswrapper[4829]: I0217 16:58:45.283559 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406004 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406077 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.406222 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:58:45 crc kubenswrapper[4829]: E0217 16:58:45.407444 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:58:47 crc kubenswrapper[4829]: E0217 16:58:47.280932 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:58:49 crc kubenswrapper[4829]: I0217 16:58:49.279410 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:58:49 crc kubenswrapper[4829]: E0217 16:58:49.280351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:00 crc 
kubenswrapper[4829]: E0217 16:59:00.281629 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:01 crc kubenswrapper[4829]: I0217 16:59:01.279867 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.280450 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.403765 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.403860 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.404030 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:59:01 crc kubenswrapper[4829]: E0217 16:59:01.405829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:13 crc kubenswrapper[4829]: I0217 16:59:13.279969 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:13 crc kubenswrapper[4829]: E0217 16:59:13.280889 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:15 crc kubenswrapper[4829]: E0217 16:59:15.282770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:17 crc kubenswrapper[4829]: E0217 16:59:17.285806 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:25 crc kubenswrapper[4829]: I0217 16:59:25.280506 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:25 crc kubenswrapper[4829]: E0217 16:59:25.283554 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:27 crc kubenswrapper[4829]: E0217 16:59:27.282505 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:31 crc kubenswrapper[4829]: E0217 16:59:31.283380 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:40 crc kubenswrapper[4829]: I0217 16:59:40.280056 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:40 crc kubenswrapper[4829]: E0217 16:59:40.281119 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:42 crc kubenswrapper[4829]: E0217 16:59:42.282334 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:42 crc kubenswrapper[4829]: E0217 16:59:42.282340 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 16:59:51 crc kubenswrapper[4829]: I0217 16:59:51.280256 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 16:59:51 crc kubenswrapper[4829]: E0217 16:59:51.281095 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 16:59:56 crc kubenswrapper[4829]: E0217 16:59:56.281466 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 16:59:56 crc kubenswrapper[4829]: E0217 16:59:56.281702 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.173687 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.188668 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.192413 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.211704 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.212358 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.275887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.276322 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.276429 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378606 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378727 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.378960 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqt65\" (UniqueName: 
\"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.380001 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.385209 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.396437 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"collect-profiles-29522460-t4bl2\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:00 crc kubenswrapper[4829]: I0217 17:00:00.544454 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:01 crc kubenswrapper[4829]: I0217 17:00:01.721965 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2"] Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.444822 4829 generic.go:334] "Generic (PLEG): container finished" podID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerID="f4cc6704b8d4cbb9f1474dc2f06edf348ff52dc93162fe645a65a1daf1e5eefe" exitCode=0 Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.444954 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerDied","Data":"f4cc6704b8d4cbb9f1474dc2f06edf348ff52dc93162fe645a65a1daf1e5eefe"} Feb 17 17:00:02 crc kubenswrapper[4829]: I0217 17:00:02.445484 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerStarted","Data":"13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef"} Feb 17 17:00:03 crc kubenswrapper[4829]: I0217 17:00:03.279539 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.018994 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069324 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069554 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.069788 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") pod \"fb72479a-1a41-4fc5-8645-6f9486b59440\" (UID: \"fb72479a-1a41-4fc5-8645-6f9486b59440\") " Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.077239 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.078524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.100267 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65" (OuterVolumeSpecName: "kube-api-access-kqt65") pod "fb72479a-1a41-4fc5-8645-6f9486b59440" (UID: "fb72479a-1a41-4fc5-8645-6f9486b59440"). InnerVolumeSpecName "kube-api-access-kqt65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172540 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb72479a-1a41-4fc5-8645-6f9486b59440-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172607 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb72479a-1a41-4fc5-8645-6f9486b59440-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.172623 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqt65\" (UniqueName: \"kubernetes.io/projected/fb72479a-1a41-4fc5-8645-6f9486b59440-kube-api-access-kqt65\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470425 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" event={"ID":"fb72479a-1a41-4fc5-8645-6f9486b59440","Type":"ContainerDied","Data":"13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef"} Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470751 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13a93f169c740e973001beb378dcddde653a67761f56ff107e63408a19a5c4ef" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.470480 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-t4bl2" Feb 17 17:00:04 crc kubenswrapper[4829]: I0217 17:00:04.476345 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} Feb 17 17:00:05 crc kubenswrapper[4829]: I0217 17:00:05.121490 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 17:00:05 crc kubenswrapper[4829]: I0217 17:00:05.133547 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-vfscd"] Feb 17 17:00:06 crc kubenswrapper[4829]: I0217 17:00:06.294600 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88fd8a6-9c2a-4529-81eb-5495aa3237c8" path="/var/lib/kubelet/pods/b88fd8a6-9c2a-4529-81eb-5495aa3237c8/volumes" Feb 17 17:00:09 crc kubenswrapper[4829]: E0217 17:00:09.285000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:10 crc kubenswrapper[4829]: E0217 17:00:10.282563 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:19 crc kubenswrapper[4829]: I0217 17:00:19.331423 4829 scope.go:117] "RemoveContainer" 
containerID="595452ee9af205895c925b359bc7ec7b896bb997533c43e394c83271b0886d7c" Feb 17 17:00:21 crc kubenswrapper[4829]: E0217 17:00:21.281816 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:23 crc kubenswrapper[4829]: E0217 17:00:23.281709 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:35 crc kubenswrapper[4829]: E0217 17:00:35.282218 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:36 crc kubenswrapper[4829]: E0217 17:00:36.282950 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:48 crc kubenswrapper[4829]: E0217 17:00:48.290933 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:00:48 crc kubenswrapper[4829]: E0217 17:00:48.290970 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:00:59 crc kubenswrapper[4829]: E0217 17:00:59.281739 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.040729 4829 generic.go:334] "Generic (PLEG): container finished" podID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerID="f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187" exitCode=2 Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.040862 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerDied","Data":"f7e8f6814ad4098f90a9a31c99fb7220bb9dd0337ff04b9caf3ec6a341209187"} Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.152598 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:00 crc kubenswrapper[4829]: E0217 17:01:00.153164 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.153183 4829 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.153458 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb72479a-1a41-4fc5-8645-6f9486b59440" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.154476 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.166801 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167620 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167716 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167748 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.167774 4829 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270045 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270174 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270211 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.270241 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.277471 4829 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.283731 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.283838 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.288556 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"keystone-cron-29522461-jp96w\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.488153 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:00 crc kubenswrapper[4829]: I0217 17:01:00.984129 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-jp96w"] Feb 17 17:01:01 crc kubenswrapper[4829]: I0217 17:01:01.054746 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerStarted","Data":"25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.046194 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092384 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" event={"ID":"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86","Type":"ContainerDied","Data":"d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092442 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d037b26ff2392f9827001ce1508a80893f4c0f752546e5eaba713d273b00d560" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.092612 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-v8r24" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.099684 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerStarted","Data":"5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227"} Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.191794 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-jp96w" podStartSLOduration=2.191773867 podStartE2EDuration="2.191773867s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:02.125297777 +0000 UTC m=+3974.542315775" watchObservedRunningTime="2026-02-17 17:01:02.191773867 +0000 UTC m=+3974.608791845" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.228925 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") pod \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.232746 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") pod \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.232857 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") pod 
\"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\" (UID: \"6a1c73d0-1366-47dc-9726-b2a5d6ed3b86\") " Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.246341 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c" (OuterVolumeSpecName: "kube-api-access-wln6c") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "kube-api-access-wln6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.275205 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: E0217 17:01:02.281877 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.335906 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln6c\" (UniqueName: \"kubernetes.io/projected/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-kube-api-access-wln6c\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.335949 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.367611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory" (OuterVolumeSpecName: "inventory") pod "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" (UID: "6a1c73d0-1366-47dc-9726-b2a5d6ed3b86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:02 crc kubenswrapper[4829]: I0217 17:01:02.437162 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a1c73d0-1366-47dc-9726-b2a5d6ed3b86-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:06 crc kubenswrapper[4829]: I0217 17:01:06.151275 4829 generic.go:334] "Generic (PLEG): container finished" podID="7522621b-701f-4bef-8232-25fb5b8abab1" containerID="5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227" exitCode=0 Feb 17 17:01:06 crc kubenswrapper[4829]: I0217 17:01:06.151329 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerDied","Data":"5169d8a2e5333f77ae7a66f2dcae582d7e26e7b0c90b909e482457d3aae33227"} Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.747157 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876465 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876698 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876780 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.876817 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") pod \"7522621b-701f-4bef-8232-25fb5b8abab1\" (UID: \"7522621b-701f-4bef-8232-25fb5b8abab1\") " Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.881967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx" (OuterVolumeSpecName: "kube-api-access-fmxhx") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "kube-api-access-fmxhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.882417 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.913932 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.957688 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data" (OuterVolumeSpecName: "config-data") pod "7522621b-701f-4bef-8232-25fb5b8abab1" (UID: "7522621b-701f-4bef-8232-25fb5b8abab1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980125 4829 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980169 4829 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980187 4829 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7522621b-701f-4bef-8232-25fb5b8abab1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:07 crc kubenswrapper[4829]: I0217 17:01:07.980201 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmxhx\" (UniqueName: \"kubernetes.io/projected/7522621b-701f-4bef-8232-25fb5b8abab1-kube-api-access-fmxhx\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178072 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-jp96w" event={"ID":"7522621b-701f-4bef-8232-25fb5b8abab1","Type":"ContainerDied","Data":"25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48"} Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178112 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25394e4451b91ee03f5efc996a2fedf22215fcf5b31d01da9e4667cea00e8c48" Feb 17 17:01:08 crc kubenswrapper[4829]: I0217 17:01:08.178126 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-jp96w" Feb 17 17:01:13 crc kubenswrapper[4829]: E0217 17:01:13.281446 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:14 crc kubenswrapper[4829]: E0217 17:01:14.281349 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:25 crc kubenswrapper[4829]: E0217 17:01:25.281563 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:28 crc kubenswrapper[4829]: E0217 17:01:28.301984 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:36 crc kubenswrapper[4829]: E0217 17:01:36.281721 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:40 crc kubenswrapper[4829]: E0217 17:01:40.282270 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:01:50 crc kubenswrapper[4829]: E0217 17:01:50.282953 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:01:52 crc kubenswrapper[4829]: E0217 17:01:52.281729 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:04 crc kubenswrapper[4829]: E0217 17:02:04.281456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:06 crc kubenswrapper[4829]: E0217 17:02:06.281284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:18 crc kubenswrapper[4829]: E0217 17:02:18.289841 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:21 crc kubenswrapper[4829]: E0217 17:02:21.282771 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:22 crc kubenswrapper[4829]: I0217 17:02:22.424183 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:22 crc kubenswrapper[4829]: I0217 17:02:22.424526 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:30 crc kubenswrapper[4829]: E0217 17:02:30.281867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:32 crc kubenswrapper[4829]: E0217 17:02:32.281827 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:41 crc kubenswrapper[4829]: E0217 17:02:41.282289 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.718400 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:46 crc kubenswrapper[4829]: E0217 17:02:46.719649 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719670 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: E0217 17:02:46.719695 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719703 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" 
containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.719998 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1c73d0-1366-47dc-9726-b2a5d6ed3b86" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.720028 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7522621b-701f-4bef-8232-25fb5b8abab1" containerName="keystone-cron" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.722143 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.733443 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871132 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871196 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.871380 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: 
\"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.922519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.925405 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.953023 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974079 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974154 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974416 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.974955 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.975079 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:46 crc kubenswrapper[4829]: I0217 17:02:46.998641 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"certified-operators-fdlcf\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.052932 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076272 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076560 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.076708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.178770 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179131 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" 
(UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179171 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179322 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.179760 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.208133 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"redhat-marketplace-8ngc2\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.243839 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:47 crc kubenswrapper[4829]: E0217 17:02:47.293653 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.684517 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:02:47 crc kubenswrapper[4829]: I0217 17:02:47.877387 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:02:48 crc kubenswrapper[4829]: E0217 17:02:48.105904 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8d01ff_56bf_4c0c_b23a_f1d39897a1e1.slice/crio-c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:02:48 crc kubenswrapper[4829]: E0217 17:02:48.119162 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8d01ff_56bf_4c0c_b23a_f1d39897a1e1.slice/crio-c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.569735 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" exitCode=0 Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.570087 4829 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.570112 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"77f0002caeed3f047c6b9dac29f1d93c8de39b8b4df63faa2366affd8529c82d"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574357 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b" exitCode=0 Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574395 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b"} Feb 17 17:02:48 crc kubenswrapper[4829]: I0217 17:02:48.574418 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"a1fc745f1370e4a89f0f709e3665185a42b6cc92ee32738d4cc7001b5ecbd3de"} Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.588702 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7"} Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.921135 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 
17:02:49.925693 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:49 crc kubenswrapper[4829]: I0217 17:02:49.932786 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073256 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073333 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.073404 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.175490 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 
17:02:50.175557 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.175617 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.176086 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.176129 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.199312 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"community-operators-ppp9d\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.253911 4829 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.603681 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} Feb 17 17:02:50 crc kubenswrapper[4829]: I0217 17:02:50.817945 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:02:50 crc kubenswrapper[4829]: W0217 17:02:50.824834 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4f1019_63ed_4b36_93b0_5cb66837ec84.slice/crio-bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569 WatchSource:0}: Error finding container bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569: Status 404 returned error can't find the container with id bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569 Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.620634 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" exitCode=0 Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.620819 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810"} Feb 17 17:02:51 crc kubenswrapper[4829]: I0217 17:02:51.621232 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" 
event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569"} Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.425160 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.425214 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.635513 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" exitCode=0 Feb 17 17:02:52 crc kubenswrapper[4829]: I0217 17:02:52.635604 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} Feb 17 17:02:53 crc kubenswrapper[4829]: I0217 17:02:53.656294 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} Feb 17 17:02:54 crc kubenswrapper[4829]: E0217 17:02:54.280497 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.667310 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7" exitCode=0 Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.667367 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7"} Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.671918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerStarted","Data":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} Feb 17 17:02:54 crc kubenswrapper[4829]: I0217 17:02:54.711027 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8ngc2" podStartSLOduration=4.239719105 podStartE2EDuration="8.711000223s" podCreationTimestamp="2026-02-17 17:02:46 +0000 UTC" firstStartedPulling="2026-02-17 17:02:48.572266701 +0000 UTC m=+4080.989284679" lastFinishedPulling="2026-02-17 17:02:53.043547819 +0000 UTC m=+4085.460565797" observedRunningTime="2026-02-17 17:02:54.70388226 +0000 UTC m=+4087.120900258" watchObservedRunningTime="2026-02-17 17:02:54.711000223 +0000 UTC m=+4087.128018221" Feb 17 17:02:55 crc kubenswrapper[4829]: I0217 17:02:55.683857 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" exitCode=0 Feb 17 
17:02:55 crc kubenswrapper[4829]: I0217 17:02:55.683905 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.699328 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerStarted","Data":"5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.703387 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerStarted","Data":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.730085 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdlcf" podStartSLOduration=4.131058829 podStartE2EDuration="10.730065865s" podCreationTimestamp="2026-02-17 17:02:46 +0000 UTC" firstStartedPulling="2026-02-17 17:02:48.576226618 +0000 UTC m=+4080.993244596" lastFinishedPulling="2026-02-17 17:02:55.175233654 +0000 UTC m=+4087.592251632" observedRunningTime="2026-02-17 17:02:56.725893893 +0000 UTC m=+4089.142911871" watchObservedRunningTime="2026-02-17 17:02:56.730065865 +0000 UTC m=+4089.147083843" Feb 17 17:02:56 crc kubenswrapper[4829]: I0217 17:02:56.750788 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ppp9d" podStartSLOduration=3.210497345 podStartE2EDuration="7.750771812s" podCreationTimestamp="2026-02-17 17:02:49 +0000 UTC" firstStartedPulling="2026-02-17 17:02:51.623468657 +0000 UTC m=+4084.040486635" 
lastFinishedPulling="2026-02-17 17:02:56.163743124 +0000 UTC m=+4088.580761102" observedRunningTime="2026-02-17 17:02:56.746538708 +0000 UTC m=+4089.163556686" watchObservedRunningTime="2026-02-17 17:02:56.750771812 +0000 UTC m=+4089.167789790" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.053833 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.053893 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.245520 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:57 crc kubenswrapper[4829]: I0217 17:02:57.245904 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:02:58 crc kubenswrapper[4829]: I0217 17:02:58.106873 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:02:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:02:58 crc kubenswrapper[4829]: > Feb 17 17:02:58 crc kubenswrapper[4829]: I0217 17:02:58.304458 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8ngc2" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" probeResult="failure" output=< Feb 17 17:02:58 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:02:58 crc kubenswrapper[4829]: > Feb 17 17:03:00 crc kubenswrapper[4829]: I0217 17:03:00.254443 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:00 crc kubenswrapper[4829]: I0217 17:03:00.255774 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:01 crc kubenswrapper[4829]: E0217 17:03:01.281127 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:01 crc kubenswrapper[4829]: I0217 17:03:01.306898 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:01 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:01 crc kubenswrapper[4829]: > Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.534519 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.537797 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.561711 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727128 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727343 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.727403 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829221 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829377 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.829412 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.830248 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.830342 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.850702 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"redhat-operators-9x86t\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:02 crc kubenswrapper[4829]: I0217 17:03:02.919498 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:03 crc kubenswrapper[4829]: I0217 17:03:03.512349 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:03 crc kubenswrapper[4829]: I0217 17:03:03.769982 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"b59ad539cf0ce290be53944b90ddaf1e58595f42c17f1d94728410f8fddfbe67"} Feb 17 17:03:04 crc kubenswrapper[4829]: I0217 17:03:04.782239 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" exitCode=0 Feb 17 17:03:04 crc kubenswrapper[4829]: I0217 17:03:04.782340 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc"} Feb 17 17:03:05 crc kubenswrapper[4829]: I0217 17:03:05.795191 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} Feb 17 17:03:06 crc kubenswrapper[4829]: E0217 17:03:06.283600 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:07 crc kubenswrapper[4829]: I0217 17:03:07.331514 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:07 crc kubenswrapper[4829]: I0217 17:03:07.379708 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.322991 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.328609 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:08 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:08 crc kubenswrapper[4829]: > Feb 17 17:03:08 crc kubenswrapper[4829]: I0217 17:03:08.823471 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8ngc2" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" containerID="cri-o://ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" gracePeriod=2 Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.756201 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.834313 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.835189 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.835361 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") pod \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\" (UID: \"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1\") " Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.836106 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities" (OuterVolumeSpecName: "utilities") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.836954 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845097 4829 generic.go:334] "Generic (PLEG): container finished" podID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" exitCode=0 Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845178 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845227 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ngc2" event={"ID":"1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1","Type":"ContainerDied","Data":"77f0002caeed3f047c6b9dac29f1d93c8de39b8b4df63faa2366affd8529c82d"} Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845251 4829 scope.go:117] "RemoveContainer" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.845455 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ngc2" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.856658 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh" (OuterVolumeSpecName: "kube-api-access-pmlnh") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "kube-api-access-pmlnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.871024 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" (UID: "1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.936123 4829 scope.go:117] "RemoveContainer" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.939650 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.939694 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlnh\" (UniqueName: \"kubernetes.io/projected/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1-kube-api-access-pmlnh\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:09 crc kubenswrapper[4829]: I0217 17:03:09.963965 4829 scope.go:117] "RemoveContainer" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051038 4829 scope.go:117] "RemoveContainer" containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.051606 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": container with ID starting with ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d not found: ID does not exist" 
containerID="ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051669 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d"} err="failed to get container status \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": rpc error: code = NotFound desc = could not find container \"ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d\": container with ID starting with ce5639d432a92d20133379be67752f2ba319a861a2ac0d3e1c74d98bd45e280d not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.051706 4829 scope.go:117] "RemoveContainer" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.052408 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": container with ID starting with c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4 not found: ID does not exist" containerID="c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052441 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4"} err="failed to get container status \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": rpc error: code = NotFound desc = could not find container \"c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4\": container with ID starting with c31373870ddc72b654487cad273132ba81b09fb9f6652290e5acb22587a3e8e4 not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052479 4829 scope.go:117] 
"RemoveContainer" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: E0217 17:03:10.052776 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": container with ID starting with c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b not found: ID does not exist" containerID="c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.052830 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b"} err="failed to get container status \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": rpc error: code = NotFound desc = could not find container \"c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b\": container with ID starting with c75203a11cb94ceda4134c2fa943ca32be9d4f5c412dd1f91d0ddb371d7b5b4b not found: ID does not exist" Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.192550 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.207320 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ngc2"] Feb 17 17:03:10 crc kubenswrapper[4829]: I0217 17:03:10.296096 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" path="/var/lib/kubelet/pods/1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1/volumes" Feb 17 17:03:11 crc kubenswrapper[4829]: I0217 17:03:11.315087 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" 
probeResult="failure" output=< Feb 17 17:03:11 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:11 crc kubenswrapper[4829]: > Feb 17 17:03:12 crc kubenswrapper[4829]: E0217 17:03:12.282713 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:14 crc kubenswrapper[4829]: I0217 17:03:14.900139 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" exitCode=0 Feb 17 17:03:14 crc kubenswrapper[4829]: I0217 17:03:14.900198 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} Feb 17 17:03:16 crc kubenswrapper[4829]: I0217 17:03:16.923286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerStarted","Data":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} Feb 17 17:03:16 crc kubenswrapper[4829]: I0217 17:03:16.942433 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x86t" podStartSLOduration=4.391819029 podStartE2EDuration="14.94241325s" podCreationTimestamp="2026-02-17 17:03:02 +0000 UTC" firstStartedPulling="2026-02-17 17:03:04.78766408 +0000 UTC m=+4097.204682058" lastFinishedPulling="2026-02-17 17:03:15.338258301 +0000 UTC m=+4107.755276279" observedRunningTime="2026-02-17 17:03:16.939340656 +0000 UTC 
m=+4109.356358644" watchObservedRunningTime="2026-02-17 17:03:16.94241325 +0000 UTC m=+4109.359431228" Feb 17 17:03:17 crc kubenswrapper[4829]: E0217 17:03:17.282608 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:18 crc kubenswrapper[4829]: I0217 17:03:18.105334 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" probeResult="failure" output=< Feb 17 17:03:18 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:03:18 crc kubenswrapper[4829]: > Feb 17 17:03:20 crc kubenswrapper[4829]: I0217 17:03:20.308249 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:20 crc kubenswrapper[4829]: I0217 17:03:20.359342 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:21 crc kubenswrapper[4829]: I0217 17:03:21.125240 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.020257 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ppp9d" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" containerID="cri-o://55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" gracePeriod=2 Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424729 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424778 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.424819 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.425648 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.425701 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594" gracePeriod=600 Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.698534 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.791919 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792018 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792133 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") pod \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\" (UID: \"2b4f1019-63ed-4b36-93b0-5cb66837ec84\") " Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.792692 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities" (OuterVolumeSpecName: "utilities") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.793015 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.805042 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm" (OuterVolumeSpecName: "kube-api-access-cz8lm") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "kube-api-access-cz8lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.861181 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b4f1019-63ed-4b36-93b0-5cb66837ec84" (UID: "2b4f1019-63ed-4b36-93b0-5cb66837ec84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.895740 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz8lm\" (UniqueName: \"kubernetes.io/projected/2b4f1019-63ed-4b36-93b0-5cb66837ec84-kube-api-access-cz8lm\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.895783 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b4f1019-63ed-4b36-93b0-5cb66837ec84-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.921351 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.921407 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:22 crc kubenswrapper[4829]: I0217 17:03:22.973022 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032324 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594" exitCode=0 Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032396 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032453 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.032472 4829 scope.go:117] "RemoveContainer" containerID="dc9d48ec9a18eafe48c6e72beae6197bad0499c89ceebfd7fd583d5a02798b60" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036093 4829 generic.go:334] "Generic (PLEG): container finished" podID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" exitCode=0 Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036774 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ppp9d" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.036818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.037110 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppp9d" event={"ID":"2b4f1019-63ed-4b36-93b0-5cb66837ec84","Type":"ContainerDied","Data":"bc4221c013b8694e5060973cec4461fdf7a2c473bd3d7cc81a7e5463e23d0569"} Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.086355 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.097535 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ppp9d"] Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.100205 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:23 crc 
kubenswrapper[4829]: I0217 17:03:23.127492 4829 scope.go:117] "RemoveContainer" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.144842 4829 scope.go:117] "RemoveContainer" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.171763 4829 scope.go:117] "RemoveContainer" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.236913 4829 scope.go:117] "RemoveContainer" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.237875 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": container with ID starting with 55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580 not found: ID does not exist" containerID="55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.237961 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580"} err="failed to get container status \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": rpc error: code = NotFound desc = could not find container \"55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580\": container with ID starting with 55b6cef6fbf99c0eef0b04c6d11bb5ba36ba7934890537a0ea32930618e3d580 not found: ID does not exist" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238001 4829 scope.go:117] "RemoveContainer" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.238493 
4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": container with ID starting with fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1 not found: ID does not exist" containerID="fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238565 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1"} err="failed to get container status \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": rpc error: code = NotFound desc = could not find container \"fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1\": container with ID starting with fc4c0786ae96d30372eaeb0c0a9f9f8030fb3b7fd35dc9bd87058df952d651e1 not found: ID does not exist" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.238639 4829 scope.go:117] "RemoveContainer" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: E0217 17:03:23.239186 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": container with ID starting with 38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810 not found: ID does not exist" containerID="38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810" Feb 17 17:03:23 crc kubenswrapper[4829]: I0217 17:03:23.239224 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810"} err="failed to get container status \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": rpc error: code = 
NotFound desc = could not find container \"38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810\": container with ID starting with 38b860cd822044c8b2b85bc40807231f8d7b9d0cbf39657ecfed57dd32c23810 not found: ID does not exist" Feb 17 17:03:24 crc kubenswrapper[4829]: E0217 17:03:24.283945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:24 crc kubenswrapper[4829]: I0217 17:03:24.295240 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" path="/var/lib/kubelet/pods/2b4f1019-63ed-4b36-93b0-5cb66837ec84/volumes" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.324779 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.326125 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x86t" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" containerID="cri-o://1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" gracePeriod=2 Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.926889 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971467 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971782 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.971813 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") pod \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\" (UID: \"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8\") " Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.972610 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities" (OuterVolumeSpecName: "utilities") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:25 crc kubenswrapper[4829]: I0217 17:03:25.980177 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7" (OuterVolumeSpecName: "kube-api-access-6vgw7") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "kube-api-access-6vgw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084771 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vgw7\" (UniqueName: \"kubernetes.io/projected/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-kube-api-access-6vgw7\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085061 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084895 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085265 4829 scope.go:117] "RemoveContainer" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.084853 4829 generic.go:334] "Generic (PLEG): container finished" podID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" exitCode=0 Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085660 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x86t" event={"ID":"0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8","Type":"ContainerDied","Data":"b59ad539cf0ce290be53944b90ddaf1e58595f42c17f1d94728410f8fddfbe67"} Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.085016 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x86t" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.106980 4829 scope.go:117] "RemoveContainer" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.111630 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" (UID: "0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.130658 4829 scope.go:117] "RemoveContainer" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.188050 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.227805 4829 scope.go:117] "RemoveContainer" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.228743 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": container with ID starting with 1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad not found: ID does not exist" containerID="1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.228807 4829 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad"} err="failed to get container status \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": rpc error: code = NotFound desc = could not find container \"1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad\": container with ID starting with 1d2c4420b7cd943de58109c608fe7933c81211774a84377c4b3c4394ed7209ad not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.228838 4829 scope.go:117] "RemoveContainer" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.230621 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": container with ID starting with 5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723 not found: ID does not exist" containerID="5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.230649 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723"} err="failed to get container status \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": rpc error: code = NotFound desc = could not find container \"5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723\": container with ID starting with 5315f24ae31f7ee25330044efed89eecf019d5ea4e59f8036bdb78d59bfb2723 not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.230666 4829 scope.go:117] "RemoveContainer" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: E0217 17:03:26.231090 4829 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": container with ID starting with 345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc not found: ID does not exist" containerID="345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.231124 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc"} err="failed to get container status \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": rpc error: code = NotFound desc = could not find container \"345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc\": container with ID starting with 345fffceb927b3accf0bce40606bd851646b85c8ff2e3b8ede782775c41426bc not found: ID does not exist" Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.419477 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:26 crc kubenswrapper[4829]: I0217 17:03:26.430535 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9x86t"] Feb 17 17:03:27 crc kubenswrapper[4829]: I0217 17:03:27.890345 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:27 crc kubenswrapper[4829]: I0217 17:03:27.991537 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:28 crc kubenswrapper[4829]: E0217 17:03:28.289476 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:28 crc kubenswrapper[4829]: I0217 17:03:28.291407 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" path="/var/lib/kubelet/pods/0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8/volumes" Feb 17 17:03:28 crc kubenswrapper[4829]: I0217 17:03:28.725395 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:29 crc kubenswrapper[4829]: I0217 17:03:29.117001 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdlcf" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" containerID="cri-o://5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" gracePeriod=2 Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.128458 4829 generic.go:334] "Generic (PLEG): container finished" podID="ece55ca0-c061-44d8-abde-b99f48421919" containerID="5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" exitCode=0 Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.128538 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842"} Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.659899 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.838706 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839028 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839165 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") pod \"ece55ca0-c061-44d8-abde-b99f48421919\" (UID: \"ece55ca0-c061-44d8-abde-b99f48421919\") " Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.839728 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities" (OuterVolumeSpecName: "utilities") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.841269 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.871423 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn" (OuterVolumeSpecName: "kube-api-access-brsxn") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "kube-api-access-brsxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.922864 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ece55ca0-c061-44d8-abde-b99f48421919" (UID: "ece55ca0-c061-44d8-abde-b99f48421919"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.943897 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece55ca0-c061-44d8-abde-b99f48421919-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:30 crc kubenswrapper[4829]: I0217 17:03:30.944007 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brsxn\" (UniqueName: \"kubernetes.io/projected/ece55ca0-c061-44d8-abde-b99f48421919-kube-api-access-brsxn\") on node \"crc\" DevicePath \"\"" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146459 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdlcf" event={"ID":"ece55ca0-c061-44d8-abde-b99f48421919","Type":"ContainerDied","Data":"a1fc745f1370e4a89f0f709e3665185a42b6cc92ee32738d4cc7001b5ecbd3de"} Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146548 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdlcf" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.146600 4829 scope.go:117] "RemoveContainer" containerID="5bfda12940aa2f5e063d241cb13d429735a7ec1a575588cf378ef2ba4fc13842" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.182475 4829 scope.go:117] "RemoveContainer" containerID="439b2ff1d322940570aa853c815c9cbc49fdcd3a6f46cb12d4ff0574367334d7" Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.182677 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.192858 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdlcf"] Feb 17 17:03:31 crc kubenswrapper[4829]: I0217 17:03:31.206136 4829 scope.go:117] "RemoveContainer" containerID="bf536347a9605d4645ef2618bf0042eac24534115b7ea44e1d759f1b375e7f0b" Feb 17 17:03:32 crc kubenswrapper[4829]: I0217 17:03:32.291418 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece55ca0-c061-44d8-abde-b99f48421919" path="/var/lib/kubelet/pods/ece55ca0-c061-44d8-abde-b99f48421919/volumes" Feb 17 17:03:35 crc kubenswrapper[4829]: E0217 17:03:35.282232 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.033811 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.034936 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" 
containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.034953 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.034976 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.034983 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035009 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035017 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035029 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035038 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035053 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035061 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035086 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" 
containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035095 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035112 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035119 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035128 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035136 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035147 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035155 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="extract-content" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035178 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035184 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035196 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" 
containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035203 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: E0217 17:03:39.035213 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035219 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="extract-utilities" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc7574a-0f40-44f9-a1d4-0a6a4dd6c5d8" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035486 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece55ca0-c061-44d8-abde-b99f48421919" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035509 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4f1019-63ed-4b36-93b0-5cb66837ec84" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.035527 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8d01ff-56bf-4c0c-b23a-f1d39897a1e1" containerName="registry-server" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.036518 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042057 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042347 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.042965 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.043120 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.047096 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.155651 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.156303 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 
17:03:39.156864 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259044 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259127 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.259189 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.347338 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.348487 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.358216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pwplj\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:39 crc kubenswrapper[4829]: I0217 17:03:39.364706 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:03:40 crc kubenswrapper[4829]: E0217 17:03:40.282274 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:40 crc kubenswrapper[4829]: I0217 17:03:40.316794 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj"] Feb 17 17:03:41 crc kubenswrapper[4829]: I0217 17:03:41.267203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerStarted","Data":"c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2"} Feb 17 17:03:42 crc kubenswrapper[4829]: I0217 17:03:42.295238 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerStarted","Data":"564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393"} Feb 17 17:03:42 crc kubenswrapper[4829]: I0217 17:03:42.317779 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" podStartSLOduration=1.892221039 podStartE2EDuration="3.317754919s" podCreationTimestamp="2026-02-17 17:03:39 +0000 UTC" firstStartedPulling="2026-02-17 17:03:40.330297637 +0000 UTC m=+4132.747315615" lastFinishedPulling="2026-02-17 17:03:41.755831517 +0000 UTC m=+4134.172849495" observedRunningTime="2026-02-17 17:03:42.302755765 +0000 UTC m=+4134.719773743" watchObservedRunningTime="2026-02-17 17:03:42.317754919 
+0000 UTC m=+4134.734772907" Feb 17 17:03:46 crc kubenswrapper[4829]: E0217 17:03:46.283009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:03:51 crc kubenswrapper[4829]: I0217 17:03:51.283106 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433049 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433122 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.433287 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:03:51 crc kubenswrapper[4829]: E0217 17:03:51.434500 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:03:58 crc kubenswrapper[4829]: E0217 17:03:58.297222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:04:06 crc kubenswrapper[4829]: E0217 17:04:06.282295 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.382098 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.382949 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.383156 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:04:09 crc kubenswrapper[4829]: E0217 17:04:09.384779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:04:21 crc kubenswrapper[4829]: E0217 17:04:21.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:04:25 crc kubenswrapper[4829]: E0217 17:04:25.282216 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:04:34 crc kubenswrapper[4829]: E0217 17:04:34.282475 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:04:39 crc kubenswrapper[4829]: E0217 17:04:39.282978 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:04:47 crc kubenswrapper[4829]: E0217 17:04:47.281633 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:04:51 crc kubenswrapper[4829]: E0217 17:04:51.281564 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:02 crc kubenswrapper[4829]: E0217 17:05:02.282644 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:05:05 crc kubenswrapper[4829]: E0217 17:05:05.281624 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:13 crc kubenswrapper[4829]: E0217 17:05:13.281124 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:05:17 crc kubenswrapper[4829]: E0217 17:05:17.282981 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:22 crc kubenswrapper[4829]: I0217 17:05:22.424485 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:05:22 crc kubenswrapper[4829]: I0217 17:05:22.425191 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:05:28 crc kubenswrapper[4829]: E0217 17:05:28.281012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:05:30 crc kubenswrapper[4829]: E0217 17:05:30.282471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:43 crc kubenswrapper[4829]: E0217 17:05:43.281814 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:05:43 crc kubenswrapper[4829]: E0217 17:05:43.282506 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:52 crc kubenswrapper[4829]: I0217 17:05:52.424663 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:05:52 crc kubenswrapper[4829]: I0217 17:05:52.425267 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:05:55 crc kubenswrapper[4829]: E0217 17:05:55.281529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:05:55 crc kubenswrapper[4829]: E0217 17:05:55.281560 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:06:06 crc kubenswrapper[4829]: E0217 17:06:06.283194 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:06:07 crc kubenswrapper[4829]: E0217 17:06:07.281296 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:06:17 crc kubenswrapper[4829]: E0217 17:06:17.281495 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:06:19 crc kubenswrapper[4829]: E0217 17:06:19.285038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.424471 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.424987 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425028 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425598 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:06:22 crc kubenswrapper[4829]: I0217 17:06:22.425655 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" gracePeriod=600 Feb 17 17:06:22 crc kubenswrapper[4829]: E0217 17:06:22.547366 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.310926 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" exitCode=0 Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.311251 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17"} Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.311284 4829 scope.go:117] "RemoveContainer" containerID="8dcb86562181c17fec581108f0ae130af5d7ae55e13d2a5356becf2229d15594" Feb 17 17:06:23 crc kubenswrapper[4829]: I0217 17:06:23.312272 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:06:23 crc kubenswrapper[4829]: E0217 17:06:23.312734 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:06:28 crc kubenswrapper[4829]: E0217 17:06:28.287918 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:06:34 crc 
kubenswrapper[4829]: I0217 17:06:34.280569 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:06:34 crc kubenswrapper[4829]: E0217 17:06:34.281482 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:06:34 crc kubenswrapper[4829]: E0217 17:06:34.282277 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:06:40 crc kubenswrapper[4829]: E0217 17:06:40.282701 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:06:45 crc kubenswrapper[4829]: E0217 17:06:45.281865 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:06:47 crc kubenswrapper[4829]: I0217 17:06:47.279949 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:06:47 crc kubenswrapper[4829]: E0217 17:06:47.280568 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:06:54 crc kubenswrapper[4829]: E0217 17:06:54.281480 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:06:59 crc kubenswrapper[4829]: E0217 17:06:59.281960 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:07:01 crc kubenswrapper[4829]: I0217 17:07:01.279325 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:07:01 crc kubenswrapper[4829]: E0217 17:07:01.279900 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:07:06 crc kubenswrapper[4829]: E0217 17:07:06.283035 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:07:11 crc kubenswrapper[4829]: E0217 17:07:11.281253 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:07:12 crc kubenswrapper[4829]: I0217 17:07:12.279746 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:07:12 crc kubenswrapper[4829]: E0217 17:07:12.280211 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:07:19 crc kubenswrapper[4829]: E0217 17:07:19.285959 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" 
Feb 17 17:07:22 crc kubenswrapper[4829]: E0217 17:07:22.281395 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:07:24 crc kubenswrapper[4829]: I0217 17:07:24.280044 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:07:24 crc kubenswrapper[4829]: E0217 17:07:24.280656 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:07:30 crc kubenswrapper[4829]: E0217 17:07:30.281643 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:07:36 crc kubenswrapper[4829]: E0217 17:07:36.282418 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:07:37 crc kubenswrapper[4829]: I0217 17:07:37.279008 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:07:37 crc kubenswrapper[4829]: E0217 17:07:37.279949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:07:42 crc kubenswrapper[4829]: E0217 17:07:42.283363 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:07:50 crc kubenswrapper[4829]: E0217 17:07:50.281351 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:07:51 crc kubenswrapper[4829]: I0217 17:07:51.278996 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:07:51 crc kubenswrapper[4829]: E0217 17:07:51.279397 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:07:53 crc kubenswrapper[4829]: E0217 17:07:53.281229 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:08:01 crc kubenswrapper[4829]: E0217 17:08:01.281077 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:08:03 crc kubenswrapper[4829]: I0217 17:08:03.279468 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:08:03 crc kubenswrapper[4829]: E0217 17:08:03.280357 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:08:07 crc kubenswrapper[4829]: E0217 17:08:07.281410 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" 
Feb 17 17:08:12 crc kubenswrapper[4829]: E0217 17:08:12.281284 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:08:18 crc kubenswrapper[4829]: I0217 17:08:18.287368 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:08:18 crc kubenswrapper[4829]: E0217 17:08:18.288244 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:08:20 crc kubenswrapper[4829]: E0217 17:08:20.281226 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:08:25 crc kubenswrapper[4829]: E0217 17:08:25.281335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:08:30 crc kubenswrapper[4829]: I0217 17:08:30.279120 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:08:30 crc kubenswrapper[4829]: E0217 17:08:30.279996 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:08:33 crc kubenswrapper[4829]: E0217 17:08:33.281770 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:08:36 crc kubenswrapper[4829]: E0217 17:08:36.285071 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:08:45 crc kubenswrapper[4829]: I0217 17:08:45.280067 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:08:45 crc kubenswrapper[4829]: E0217 17:08:45.281024 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:08:47 crc kubenswrapper[4829]: E0217 17:08:47.281365 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:08:50 crc kubenswrapper[4829]: E0217 17:08:50.284436 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:00 crc kubenswrapper[4829]: I0217 17:09:00.280034 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:00 crc kubenswrapper[4829]: E0217 17:09:00.280861 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:01 crc kubenswrapper[4829]: E0217 17:09:01.281521 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" 
Feb 17 17:09:02 crc kubenswrapper[4829]: I0217 17:09:02.280737 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413048 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413150 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.413358 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:09:02 crc kubenswrapper[4829]: E0217 17:09:02.414641 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:11 crc kubenswrapper[4829]: I0217 17:09:11.279720 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:11 crc kubenswrapper[4829]: E0217 17:09:11.281200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398587 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398648 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.398765 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:09:12 crc kubenswrapper[4829]: E0217 17:09:12.399959 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:13 crc kubenswrapper[4829]: E0217 17:09:13.281177 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:24 crc kubenswrapper[4829]: E0217 17:09:24.284385 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:24 crc kubenswrapper[4829]: E0217 17:09:24.284462 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:25 crc kubenswrapper[4829]: I0217 17:09:25.279601 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:25 crc kubenswrapper[4829]: E0217 17:09:25.280032 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:37 
crc kubenswrapper[4829]: E0217 17:09:37.281464 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:38 crc kubenswrapper[4829]: E0217 17:09:38.298991 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:40 crc kubenswrapper[4829]: I0217 17:09:40.279466 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:40 crc kubenswrapper[4829]: E0217 17:09:40.280459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:52 crc kubenswrapper[4829]: E0217 17:09:52.282305 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:09:52 crc kubenswrapper[4829]: E0217 17:09:52.283208 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:09:54 crc kubenswrapper[4829]: I0217 17:09:54.551015 4829 generic.go:334] "Generic (PLEG): container finished" podID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerID="564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393" exitCode=2 Feb 17 17:09:54 crc kubenswrapper[4829]: I0217 17:09:54.551105 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerDied","Data":"564562a2a4951a868dde05fddfe5a2bdc6e6b8563d073314ff71409a3a871393"} Feb 17 17:09:55 crc kubenswrapper[4829]: I0217 17:09:55.279398 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:09:55 crc kubenswrapper[4829]: E0217 17:09:55.279971 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.012220 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212432 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212636 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.212897 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") pod \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\" (UID: \"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64\") " Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.220240 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97" (OuterVolumeSpecName: "kube-api-access-nqz97") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "kube-api-access-nqz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.316140 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqz97\" (UniqueName: \"kubernetes.io/projected/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-kube-api-access-nqz97\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.345740 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.346140 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory" (OuterVolumeSpecName: "inventory") pod "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" (UID: "5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.418377 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.418418 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586536 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" event={"ID":"5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64","Type":"ContainerDied","Data":"c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2"} Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586587 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c548adf5a62ad1121ffd52bb442991f696d7aaf110315624c9ffb9412ab22fd2" Feb 17 17:09:56 crc kubenswrapper[4829]: I0217 17:09:56.586607 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pwplj" Feb 17 17:10:03 crc kubenswrapper[4829]: E0217 17:10:03.282269 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:06 crc kubenswrapper[4829]: E0217 17:10:06.283123 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:09 crc kubenswrapper[4829]: I0217 17:10:09.279780 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:09 crc kubenswrapper[4829]: E0217 17:10:09.280634 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:16 crc kubenswrapper[4829]: E0217 17:10:16.284720 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:17 
crc kubenswrapper[4829]: E0217 17:10:17.281409 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:24 crc kubenswrapper[4829]: I0217 17:10:24.280750 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:24 crc kubenswrapper[4829]: E0217 17:10:24.283185 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:30 crc kubenswrapper[4829]: E0217 17:10:30.283212 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:31 crc kubenswrapper[4829]: E0217 17:10:31.282663 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:36 crc kubenswrapper[4829]: I0217 17:10:36.279487 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:36 crc kubenswrapper[4829]: E0217 17:10:36.280597 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:43 crc kubenswrapper[4829]: E0217 17:10:43.281023 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:10:46 crc kubenswrapper[4829]: E0217 17:10:46.281292 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:10:51 crc kubenswrapper[4829]: I0217 17:10:51.280123 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:10:51 crc kubenswrapper[4829]: E0217 17:10:51.280854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:10:58 crc kubenswrapper[4829]: E0217 17:10:58.288448 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:01 crc kubenswrapper[4829]: E0217 17:11:01.283426 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:06 crc kubenswrapper[4829]: I0217 17:11:06.279177 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:06 crc kubenswrapper[4829]: E0217 17:11:06.279985 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:11:11 crc kubenswrapper[4829]: E0217 17:11:11.281749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 
17:11:15 crc kubenswrapper[4829]: E0217 17:11:15.282588 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:20 crc kubenswrapper[4829]: I0217 17:11:20.279809 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:20 crc kubenswrapper[4829]: E0217 17:11:20.280717 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:11:26 crc kubenswrapper[4829]: E0217 17:11:26.282718 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:26 crc kubenswrapper[4829]: E0217 17:11:26.282746 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:35 crc kubenswrapper[4829]: I0217 17:11:35.279482 4829 scope.go:117] "RemoveContainer" 
containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:11:36 crc kubenswrapper[4829]: I0217 17:11:36.664892 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} Feb 17 17:11:37 crc kubenswrapper[4829]: E0217 17:11:37.281457 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:41 crc kubenswrapper[4829]: E0217 17:11:41.283760 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:11:51 crc kubenswrapper[4829]: E0217 17:11:51.282519 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:11:54 crc kubenswrapper[4829]: E0217 17:11:54.286610 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" 
podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:04 crc kubenswrapper[4829]: E0217 17:12:04.283453 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:09 crc kubenswrapper[4829]: E0217 17:12:09.284838 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:15 crc kubenswrapper[4829]: E0217 17:12:15.281425 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:20 crc kubenswrapper[4829]: E0217 17:12:20.282969 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:27 crc kubenswrapper[4829]: E0217 17:12:27.281915 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:31 crc kubenswrapper[4829]: E0217 17:12:31.281854 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:40 crc kubenswrapper[4829]: E0217 17:12:40.282336 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:46 crc kubenswrapper[4829]: E0217 17:12:46.282372 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:12:54 crc kubenswrapper[4829]: E0217 17:12:54.281631 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.750948 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:12:59 crc kubenswrapper[4829]: E0217 17:12:59.752028 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.752045 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.752478 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.756640 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.769971 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910769 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:12:59 crc kubenswrapper[4829]: I0217 17:12:59.910881 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014054 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014139 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.014176 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.015058 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.015879 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.046385 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"certified-operators-kvpv6\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.077006 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:00 crc kubenswrapper[4829]: E0217 17:13:00.304566 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:00 crc kubenswrapper[4829]: I0217 17:13:00.718017 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:01 crc kubenswrapper[4829]: I0217 17:13:01.592608 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" exitCode=0 Feb 17 17:13:01 crc kubenswrapper[4829]: I0217 17:13:01.592713 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898"} Feb 17 17:13:01 crc 
kubenswrapper[4829]: I0217 17:13:01.592952 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"dbfa6c8eeaf887ff64e6cd6c0e72bc752700665669daf4488fa17d9addbe5bd5"} Feb 17 17:13:03 crc kubenswrapper[4829]: I0217 17:13:03.619691 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} Feb 17 17:13:04 crc kubenswrapper[4829]: I0217 17:13:04.633438 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" exitCode=0 Feb 17 17:13:04 crc kubenswrapper[4829]: I0217 17:13:04.633542 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} Feb 17 17:13:06 crc kubenswrapper[4829]: E0217 17:13:06.281095 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:06 crc kubenswrapper[4829]: I0217 17:13:06.663643 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerStarted","Data":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} Feb 17 17:13:06 crc kubenswrapper[4829]: I0217 
17:13:06.693197 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvpv6" podStartSLOduration=3.912346909 podStartE2EDuration="7.693171606s" podCreationTimestamp="2026-02-17 17:12:59 +0000 UTC" firstStartedPulling="2026-02-17 17:13:01.596368279 +0000 UTC m=+4694.013386267" lastFinishedPulling="2026-02-17 17:13:05.377192986 +0000 UTC m=+4697.794210964" observedRunningTime="2026-02-17 17:13:06.685053636 +0000 UTC m=+4699.102071634" watchObservedRunningTime="2026-02-17 17:13:06.693171606 +0000 UTC m=+4699.110189584" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.078179 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.078518 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.141180 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.786445 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:10 crc kubenswrapper[4829]: I0217 17:13:10.849610 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:12 crc kubenswrapper[4829]: I0217 17:13:12.752814 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvpv6" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" containerID="cri-o://d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" gracePeriod=2 Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.314782 4829 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.364836 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.365015 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.365380 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") pod \"0129998b-a7ba-43ce-be38-40e50b1fd26d\" (UID: \"0129998b-a7ba-43ce-be38-40e50b1fd26d\") " Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.367347 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities" (OuterVolumeSpecName: "utilities") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.370560 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.371674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc" (OuterVolumeSpecName: "kube-api-access-ct4bc") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "kube-api-access-ct4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.435079 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0129998b-a7ba-43ce-be38-40e50b1fd26d" (UID: "0129998b-a7ba-43ce-be38-40e50b1fd26d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.472823 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4bc\" (UniqueName: \"kubernetes.io/projected/0129998b-a7ba-43ce-be38-40e50b1fd26d-kube-api-access-ct4bc\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.472875 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0129998b-a7ba-43ce-be38-40e50b1fd26d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.762916 4829 generic.go:334] "Generic (PLEG): container finished" podID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" exitCode=0 Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.762999 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.764670 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvpv6" event={"ID":"0129998b-a7ba-43ce-be38-40e50b1fd26d","Type":"ContainerDied","Data":"dbfa6c8eeaf887ff64e6cd6c0e72bc752700665669daf4488fa17d9addbe5bd5"} Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.763009 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvpv6" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.764716 4829 scope.go:117] "RemoveContainer" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.801115 4829 scope.go:117] "RemoveContainer" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.805904 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.818197 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvpv6"] Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.826472 4829 scope.go:117] "RemoveContainer" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.875434 4829 scope.go:117] "RemoveContainer" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.876153 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": container with ID starting with d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7 not found: ID does not exist" containerID="d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876216 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7"} err="failed to get container status \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": rpc error: code = NotFound desc = could not find 
container \"d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7\": container with ID starting with d352267948d70bb3c08650ad6a3a7a21b426332e66d3f497980671fa1e3e64e7 not found: ID does not exist" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876259 4829 scope.go:117] "RemoveContainer" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.876841 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": container with ID starting with b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058 not found: ID does not exist" containerID="b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876876 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058"} err="failed to get container status \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": rpc error: code = NotFound desc = could not find container \"b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058\": container with ID starting with b85424114fc841d60eb0fbddce49fe45e3c3e9bc21d13fe24111966fa1863058 not found: ID does not exist" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.876900 4829 scope.go:117] "RemoveContainer" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: E0217 17:13:13.877328 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": container with ID starting with ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898 not found: ID does 
not exist" containerID="ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898" Feb 17 17:13:13 crc kubenswrapper[4829]: I0217 17:13:13.877358 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898"} err="failed to get container status \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": rpc error: code = NotFound desc = could not find container \"ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898\": container with ID starting with ed092e533d091fe42a2030ef85ed3bc2f82c721d8b7c7e237e22bb6542f64898 not found: ID does not exist" Feb 17 17:13:14 crc kubenswrapper[4829]: E0217 17:13:14.287584 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:14 crc kubenswrapper[4829]: I0217 17:13:14.296843 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" path="/var/lib/kubelet/pods/0129998b-a7ba-43ce-be38-40e50b1fd26d/volumes" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.249084 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250249 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-content" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250268 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-content" Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250282 4829 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250290 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: E0217 17:13:18.250299 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-utilities" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250307 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="extract-utilities" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.250610 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="0129998b-a7ba-43ce-be38-40e50b1fd26d" containerName="registry-server" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.252940 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.261779 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361504 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361565 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.361734 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464278 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464507 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464544 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.464795 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.465033 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.487362 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"redhat-marketplace-lxd6h\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:18 crc kubenswrapper[4829]: I0217 17:13:18.595365 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.184318 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:19 crc kubenswrapper[4829]: W0217 17:13:19.191550 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3af34c_4b38_44da_a726_72f1565c3fc8.slice/crio-a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e WatchSource:0}: Error finding container a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e: Status 404 returned error can't find the container with id a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e Feb 17 17:13:19 crc kubenswrapper[4829]: E0217 17:13:19.283013 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.824864 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a" exitCode=0 Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.824953 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"} Feb 17 17:13:19 crc kubenswrapper[4829]: I0217 17:13:19.825143 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" 
event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e"} Feb 17 17:13:21 crc kubenswrapper[4829]: I0217 17:13:21.887970 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} Feb 17 17:13:23 crc kubenswrapper[4829]: I0217 17:13:23.909623 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28" exitCode=0 Feb 17 17:13:23 crc kubenswrapper[4829]: I0217 17:13:23.909704 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} Feb 17 17:13:24 crc kubenswrapper[4829]: I0217 17:13:24.924187 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerStarted","Data":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"} Feb 17 17:13:24 crc kubenswrapper[4829]: I0217 17:13:24.963443 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxd6h" podStartSLOduration=2.448047371 podStartE2EDuration="6.963421364s" podCreationTimestamp="2026-02-17 17:13:18 +0000 UTC" firstStartedPulling="2026-02-17 17:13:19.830368438 +0000 UTC m=+4712.247386416" lastFinishedPulling="2026-02-17 17:13:24.345742431 +0000 UTC m=+4716.762760409" observedRunningTime="2026-02-17 17:13:24.956725314 +0000 UTC m=+4717.373743302" watchObservedRunningTime="2026-02-17 17:13:24.963421364 +0000 UTC 
m=+4717.380439342" Feb 17 17:13:27 crc kubenswrapper[4829]: E0217 17:13:27.283283 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.596229 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.596549 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:28 crc kubenswrapper[4829]: I0217 17:13:28.644207 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:32 crc kubenswrapper[4829]: E0217 17:13:32.284002 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:38 crc kubenswrapper[4829]: I0217 17:13:38.670173 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:38 crc kubenswrapper[4829]: I0217 17:13:38.726120 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.061281 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxd6h" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" 
containerName="registry-server" containerID="cri-o://b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" gracePeriod=2 Feb 17 17:13:39 crc kubenswrapper[4829]: E0217 17:13:39.287593 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.651731 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.734998 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.735210 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.735236 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") pod \"cd3af34c-4b38-44da-a726-72f1565c3fc8\" (UID: \"cd3af34c-4b38-44da-a726-72f1565c3fc8\") " Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.739493 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities" (OuterVolumeSpecName: "utilities") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.744597 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x" (OuterVolumeSpecName: "kube-api-access-ztv4x") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "kube-api-access-ztv4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.764820 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd3af34c-4b38-44da-a726-72f1565c3fc8" (UID: "cd3af34c-4b38-44da-a726-72f1565c3fc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838867 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838901 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3af34c-4b38-44da-a726-72f1565c3fc8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:39 crc kubenswrapper[4829]: I0217 17:13:39.838912 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztv4x\" (UniqueName: \"kubernetes.io/projected/cd3af34c-4b38-44da-a726-72f1565c3fc8-kube-api-access-ztv4x\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.072985 4829 generic.go:334] "Generic (PLEG): container finished" podID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" exitCode=0 Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073037 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"} Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073050 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxd6h" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073076 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxd6h" event={"ID":"cd3af34c-4b38-44da-a726-72f1565c3fc8","Type":"ContainerDied","Data":"a0e822b884eda9b484abedd03e3d50813a8d60c194e1e5c4971372a76de04d5e"} Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.073099 4829 scope.go:117] "RemoveContainer" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.103879 4829 scope.go:117] "RemoveContainer" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.119384 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.128731 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxd6h"] Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.130882 4829 scope.go:117] "RemoveContainer" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.181491 4829 scope.go:117] "RemoveContainer" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 17:13:40.182061 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": container with ID starting with b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9 not found: ID does not exist" containerID="b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182093 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9"} err="failed to get container status \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": rpc error: code = NotFound desc = could not find container \"b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9\": container with ID starting with b3867d287a0fc5f8af135fec31ad00bf5f6c326d5ecea1c1105e7ea889e5a4c9 not found: ID does not exist" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182121 4829 scope.go:117] "RemoveContainer" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28" Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 17:13:40.182417 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": container with ID starting with 52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28 not found: ID does not exist" containerID="52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182449 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28"} err="failed to get container status \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": rpc error: code = NotFound desc = could not find container \"52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28\": container with ID starting with 52b9836fa81ef4e77bb59b0c374f8f7306f0c81ea04fca29a9766a170dc6bc28 not found: ID does not exist" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182471 4829 scope.go:117] "RemoveContainer" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a" Feb 17 17:13:40 crc kubenswrapper[4829]: E0217 
17:13:40.182822 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": container with ID starting with 784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a not found: ID does not exist" containerID="784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.182872 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a"} err="failed to get container status \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": rpc error: code = NotFound desc = could not find container \"784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a\": container with ID starting with 784da21722513c259e1f6482a900051bc9bddee257ee3d3c10a21f7eb0f4851a not found: ID does not exist" Feb 17 17:13:40 crc kubenswrapper[4829]: I0217 17:13:40.293006 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" path="/var/lib/kubelet/pods/cd3af34c-4b38-44da-a726-72f1565c3fc8/volumes" Feb 17 17:13:47 crc kubenswrapper[4829]: E0217 17:13:47.281988 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:51 crc kubenswrapper[4829]: E0217 17:13:51.283375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:13:52 crc kubenswrapper[4829]: I0217 17:13:52.425067 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:13:52 crc kubenswrapper[4829]: I0217 17:13:52.425467 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.292725 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293280 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-utilities" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293304 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-utilities" Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293327 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293335 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server" Feb 17 17:13:53 crc kubenswrapper[4829]: E0217 17:13:53.293358 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" 
containerName="extract-content" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293366 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="extract-content" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.293682 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3af34c-4b38-44da-a726-72f1565c3fc8" containerName="registry-server" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.295950 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.314836 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379657 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379781 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.379895 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " 
pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481572 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481646 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.481719 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.482157 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.482190 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " 
pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.657803 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"community-operators-vztn2\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:53 crc kubenswrapper[4829]: I0217 17:13:53.922747 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:13:54 crc kubenswrapper[4829]: I0217 17:13:54.455460 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:13:54 crc kubenswrapper[4829]: W0217 17:13:54.461506 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a19588b_3fe9_4064_8fc0_b9053f7efdf8.slice/crio-f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece WatchSource:0}: Error finding container f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece: Status 404 returned error can't find the container with id f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236853 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad" exitCode=0 Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236903 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad"} Feb 17 17:13:55 crc kubenswrapper[4829]: I0217 17:13:55.236932 
4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece"} Feb 17 17:13:56 crc kubenswrapper[4829]: I0217 17:13:56.250323 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d"} Feb 17 17:13:58 crc kubenswrapper[4829]: I0217 17:13:58.274952 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d" exitCode=0 Feb 17 17:13:58 crc kubenswrapper[4829]: I0217 17:13:58.275047 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d"} Feb 17 17:13:58 crc kubenswrapper[4829]: E0217 17:13:58.280837 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:13:59 crc kubenswrapper[4829]: I0217 17:13:59.291434 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerStarted","Data":"8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99"} Feb 17 17:13:59 crc kubenswrapper[4829]: I0217 17:13:59.318258 4829 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-vztn2" podStartSLOduration=2.8591588100000003 podStartE2EDuration="6.31823727s" podCreationTimestamp="2026-02-17 17:13:53 +0000 UTC" firstStartedPulling="2026-02-17 17:13:55.239247316 +0000 UTC m=+4747.656265294" lastFinishedPulling="2026-02-17 17:13:58.698325776 +0000 UTC m=+4751.115343754" observedRunningTime="2026-02-17 17:13:59.314730306 +0000 UTC m=+4751.731748284" watchObservedRunningTime="2026-02-17 17:13:59.31823727 +0000 UTC m=+4751.735255248" Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.870198 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.873970 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.884795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.945688 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.945832 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:02 crc kubenswrapper[4829]: I0217 17:14:02.946143 4829 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.050282 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.050908 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051335 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051404 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.051974 4829 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.450932 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"redhat-operators-n675c\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.513485 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.923712 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.924083 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:03 crc kubenswrapper[4829]: I0217 17:14:03.986689 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.090905 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.363824 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"} Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.363869 4829 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"672a375342d36e66eb68b76fc86c5bb513917a01ecc45fb25bf0a6473d4b6768"} Feb 17 17:14:04 crc kubenswrapper[4829]: I0217 17:14:04.424121 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.374539 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" exitCode=0 Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.374620 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"} Feb 17 17:14:05 crc kubenswrapper[4829]: I0217 17:14:05.377204 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.234508 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.392837 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"} Feb 17 17:14:06 crc kubenswrapper[4829]: I0217 17:14:06.393520 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vztn2" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" 
containerID="cri-o://8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99" gracePeriod=2 Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402404 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402450 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.402588 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:14:06 crc kubenswrapper[4829]: E0217 17:14:06.403777 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:07 crc kubenswrapper[4829]: I0217 17:14:07.404857 4829 generic.go:334] "Generic (PLEG): container finished" podID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerID="8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99" exitCode=0 Feb 17 17:14:07 crc kubenswrapper[4829]: I0217 17:14:07.405055 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99"} Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.129657 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322139 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322356 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.322465 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") pod \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\" (UID: \"7a19588b-3fe9-4064-8fc0-b9053f7efdf8\") " Feb 17 17:14:08 crc 
kubenswrapper[4829]: I0217 17:14:08.323525 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities" (OuterVolumeSpecName: "utilities") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.328256 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s" (OuterVolumeSpecName: "kube-api-access-d9d5s") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "kube-api-access-d9d5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418201 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vztn2" event={"ID":"7a19588b-3fe9-4064-8fc0-b9053f7efdf8","Type":"ContainerDied","Data":"f934319167f3fb7db54ebfde485a2a2c7601a5747b72319da5aa2478a6b20ece"} Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418250 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vztn2" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.418912 4829 scope.go:117] "RemoveContainer" containerID="8addfb6b2a3db286ad110e41ef7cba1bb6d2a207bea5c7c9f8b7a96de70bdb99" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.425354 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9d5s\" (UniqueName: \"kubernetes.io/projected/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-kube-api-access-d9d5s\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.425384 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.445256 4829 scope.go:117] "RemoveContainer" containerID="755e895ed5a5edd62fdde56616624c85702caaccb93e02a2046158788d6ff29d" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.482441 4829 scope.go:117] "RemoveContainer" containerID="bf2e0601a62e24a4491550474556a29ddaec747df31510f28fbff977bce6afad" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.776160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a19588b-3fe9-4064-8fc0-b9053f7efdf8" (UID: "7a19588b-3fe9-4064-8fc0-b9053f7efdf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:08 crc kubenswrapper[4829]: I0217 17:14:08.839315 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a19588b-3fe9-4064-8fc0-b9053f7efdf8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:09 crc kubenswrapper[4829]: I0217 17:14:09.058222 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:14:09 crc kubenswrapper[4829]: I0217 17:14:09.068611 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vztn2"] Feb 17 17:14:10 crc kubenswrapper[4829]: I0217 17:14:10.295099 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" path="/var/lib/kubelet/pods/7a19588b-3fe9-4064-8fc0-b9053f7efdf8/volumes" Feb 17 17:14:12 crc kubenswrapper[4829]: I0217 17:14:12.458783 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" exitCode=0 Feb 17 17:14:12 crc kubenswrapper[4829]: I0217 17:14:12.458868 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"} Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.388651 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.389003 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.389153 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:14:13 crc kubenswrapper[4829]: E0217 17:14:13.390725 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.472219 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerStarted","Data":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.498468 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n675c" podStartSLOduration=4.017884628 podStartE2EDuration="11.498449197s" podCreationTimestamp="2026-02-17 17:14:02 +0000 UTC" firstStartedPulling="2026-02-17 17:14:05.376951904 +0000 UTC m=+4757.793969882" lastFinishedPulling="2026-02-17 17:14:12.857516453 +0000 UTC m=+4765.274534451" observedRunningTime="2026-02-17 17:14:13.489456843 +0000 UTC m=+4765.906474821" watchObservedRunningTime="2026-02-17 17:14:13.498449197 +0000 UTC m=+4765.915467175" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.514906 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:13 crc kubenswrapper[4829]: I0217 17:14:13.514959 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 
17:14:14 crc kubenswrapper[4829]: I0217 17:14:14.569216 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n675c" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" probeResult="failure" output=< Feb 17 17:14:14 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:14:14 crc kubenswrapper[4829]: > Feb 17 17:14:17 crc kubenswrapper[4829]: E0217 17:14:17.281533 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:22 crc kubenswrapper[4829]: I0217 17:14:22.425502 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:22 crc kubenswrapper[4829]: I0217 17:14:22.426151 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.467167 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.524361 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:24 crc kubenswrapper[4829]: I0217 17:14:24.706504 4829 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:25 crc kubenswrapper[4829]: I0217 17:14:25.617490 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n675c" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" containerID="cri-o://3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" gracePeriod=2 Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.628350 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631124 4829 generic.go:334] "Generic (PLEG): container finished" podID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" exitCode=0 Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631163 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n675c" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631172 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631204 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n675c" event={"ID":"9fcf4ba0-36bd-4bfe-89aa-b295791b5961","Type":"ContainerDied","Data":"672a375342d36e66eb68b76fc86c5bb513917a01ecc45fb25bf0a6473d4b6768"} Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.631225 4829 scope.go:117] "RemoveContainer" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.660680 4829 scope.go:117] "RemoveContainer" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.697353 4829 scope.go:117] "RemoveContainer" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748341 4829 scope.go:117] "RemoveContainer" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.748694 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": container with ID starting with 3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7 not found: ID does not exist" containerID="3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748722 4829 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7"} err="failed to get container status \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": rpc error: code = NotFound desc = could not find container \"3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7\": container with ID starting with 3e7f9acc9ec4df585debd7272ecc863cced6b067e38350f5e2d105d7a76d57c7 not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.748744 4829 scope.go:117] "RemoveContainer" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.749045 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": container with ID starting with f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a not found: ID does not exist" containerID="f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749084 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a"} err="failed to get container status \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": rpc error: code = NotFound desc = could not find container \"f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a\": container with ID starting with f79824145d2e7163cc845b98ca29d44113fd65c7257bd1f5fb38efcce748053a not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749109 4829 scope.go:117] "RemoveContainer" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: E0217 17:14:26.749341 4829 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": container with ID starting with d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef not found: ID does not exist" containerID="d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.749373 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef"} err="failed to get container status \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": rpc error: code = NotFound desc = could not find container \"d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef\": container with ID starting with d12da3ca4b13c98f666e519faecf6f975b22bdeb1a690e1a2821288d6c1f42ef not found: ID does not exist" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.754767 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.754839 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" (UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.755111 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") pod \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\" 
(UID: \"9fcf4ba0-36bd-4bfe-89aa-b295791b5961\") " Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.755874 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities" (OuterVolumeSpecName: "utilities") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.762518 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6" (OuterVolumeSpecName: "kube-api-access-tkss6") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "kube-api-access-tkss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.858936 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkss6\" (UniqueName: \"kubernetes.io/projected/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-kube-api-access-tkss6\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.858976 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.899674 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fcf4ba0-36bd-4bfe-89aa-b295791b5961" (UID: "9fcf4ba0-36bd-4bfe-89aa-b295791b5961"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.966919 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fcf4ba0-36bd-4bfe-89aa-b295791b5961-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.975263 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:26 crc kubenswrapper[4829]: I0217 17:14:26.988047 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n675c"] Feb 17 17:14:28 crc kubenswrapper[4829]: E0217 17:14:28.289483 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:28 crc kubenswrapper[4829]: I0217 17:14:28.295883 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" path="/var/lib/kubelet/pods/9fcf4ba0-36bd-4bfe-89aa-b295791b5961/volumes" Feb 17 17:14:31 crc kubenswrapper[4829]: E0217 17:14:31.281291 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:41 crc kubenswrapper[4829]: E0217 17:14:41.284009 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:42 crc kubenswrapper[4829]: E0217 17:14:42.284034 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.424473 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.426208 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.426285 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.427552 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:14:52 crc kubenswrapper[4829]: 
I0217 17:14:52.427695 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" gracePeriod=600 Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916723 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" exitCode=0 Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916778 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787"} Feb 17 17:14:52 crc kubenswrapper[4829]: I0217 17:14:52.916825 4829 scope.go:117] "RemoveContainer" containerID="93ee334d7e7e02a536d91070eeb36dc75940d4c24f90b05ed18ad5fc35587b17" Feb 17 17:14:53 crc kubenswrapper[4829]: E0217 17:14:53.280375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:14:53 crc kubenswrapper[4829]: E0217 17:14:53.280375 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:14:53 crc kubenswrapper[4829]: I0217 17:14:53.928668 4829 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.163006 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164120 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164136 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164154 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164160 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164168 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164175 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164192 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164198 4829 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164218 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164225 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: E0217 17:15:00.164241 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164247 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164465 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcf4ba0-36bd-4bfe-89aa-b295791b5961" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.164478 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a19588b-3fe9-4064-8fc0-b9053f7efdf8" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.165400 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.169639 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.169854 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174757 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174811 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.174915 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.194772 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277276 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277326 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.277386 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.278541 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.292054 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.304372 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"collect-profiles-29522475-bchdn\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:00 crc kubenswrapper[4829]: I0217 17:15:00.497183 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:01 crc kubenswrapper[4829]: I0217 17:15:01.035061 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn"] Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.018203 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerStarted","Data":"df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e"} Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.019919 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerStarted","Data":"9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021"} Feb 17 17:15:02 crc kubenswrapper[4829]: I0217 17:15:02.037144 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" 
podStartSLOduration=2.037124687 podStartE2EDuration="2.037124687s" podCreationTimestamp="2026-02-17 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:15:02.03503041 +0000 UTC m=+4814.452048408" watchObservedRunningTime="2026-02-17 17:15:02.037124687 +0000 UTC m=+4814.454142665" Feb 17 17:15:03 crc kubenswrapper[4829]: I0217 17:15:03.033360 4829 generic.go:334] "Generic (PLEG): container finished" podID="fe68a533-c785-4f43-bee6-b83031125f08" containerID="df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e" exitCode=0 Feb 17 17:15:03 crc kubenswrapper[4829]: I0217 17:15:03.033446 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerDied","Data":"df7e1ad189e4928829332540a4bde38f1cc610a2f54550bb44671669d7f9587e"} Feb 17 17:15:04 crc kubenswrapper[4829]: E0217 17:15:04.284459 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.464967 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608307 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608590 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.608655 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") pod \"fe68a533-c785-4f43-bee6-b83031125f08\" (UID: \"fe68a533-c785-4f43-bee6-b83031125f08\") " Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.609196 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.615289 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.615340 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq" (OuterVolumeSpecName: "kube-api-access-r86rq") pod "fe68a533-c785-4f43-bee6-b83031125f08" (UID: "fe68a533-c785-4f43-bee6-b83031125f08"). InnerVolumeSpecName "kube-api-access-r86rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711186 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe68a533-c785-4f43-bee6-b83031125f08-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711234 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe68a533-c785-4f43-bee6-b83031125f08-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4829]: I0217 17:15:04.711249 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r86rq\" (UniqueName: \"kubernetes.io/projected/fe68a533-c785-4f43-bee6-b83031125f08-kube-api-access-r86rq\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.063920 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" event={"ID":"fe68a533-c785-4f43-bee6-b83031125f08","Type":"ContainerDied","Data":"9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021"} Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.064205 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a68beb418a81c6c0530f8bb1695e5cb7095f889ac306d8555f54a1571ddc021" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.064268 4829 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-bchdn" Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.146617 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 17:15:05 crc kubenswrapper[4829]: I0217 17:15:05.164356 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-gmcbj"] Feb 17 17:15:06 crc kubenswrapper[4829]: E0217 17:15:06.281926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:06 crc kubenswrapper[4829]: I0217 17:15:06.292675 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3000c07b-e126-4f72-9667-251ca9a53989" path="/var/lib/kubelet/pods/3000c07b-e126-4f72-9667-251ca9a53989/volumes" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.046346 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:13 crc kubenswrapper[4829]: E0217 17:15:13.048460 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.048564 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.049009 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe68a533-c785-4f43-bee6-b83031125f08" containerName="collect-profiles" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.050300 
4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054109 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054128 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054363 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7rlh9" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.054548 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.057361 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136157 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136220 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" 
Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.136263 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238456 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238517 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.238548 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.244250 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.245150 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.256024 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:13 crc kubenswrapper[4829]: I0217 17:15:13.404439 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:15:14 crc kubenswrapper[4829]: I0217 17:15:14.587287 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw"] Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.486799 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerStarted","Data":"b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55"} Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.487190 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerStarted","Data":"ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752"} Feb 17 17:15:15 crc kubenswrapper[4829]: I0217 17:15:15.510367 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" podStartSLOduration=2.045451122 podStartE2EDuration="2.510344521s" podCreationTimestamp="2026-02-17 17:15:13 +0000 UTC" firstStartedPulling="2026-02-17 17:15:14.592420444 +0000 UTC m=+4827.009438422" lastFinishedPulling="2026-02-17 17:15:15.057313843 +0000 UTC m=+4827.474331821" observedRunningTime="2026-02-17 17:15:15.499752505 +0000 UTC m=+4827.916770493" watchObservedRunningTime="2026-02-17 17:15:15.510344521 +0000 UTC m=+4827.927362499" Feb 17 17:15:17 crc kubenswrapper[4829]: E0217 17:15:17.280795 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:19 crc kubenswrapper[4829]: I0217 17:15:19.887401 4829 scope.go:117] "RemoveContainer" containerID="95dd55496f8a09ae435d254d199266ef120fffad020e7c4106b2896b4593290f" Feb 17 17:15:20 crc kubenswrapper[4829]: E0217 17:15:20.282156 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:31 crc kubenswrapper[4829]: E0217 17:15:31.281738 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:35 crc kubenswrapper[4829]: E0217 17:15:35.282471 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:44 crc kubenswrapper[4829]: E0217 17:15:44.283763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:46 crc kubenswrapper[4829]: E0217 17:15:46.281708 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:15:55 crc kubenswrapper[4829]: E0217 17:15:55.284091 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:15:59 crc kubenswrapper[4829]: E0217 17:15:59.283164 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:10 crc kubenswrapper[4829]: E0217 17:16:10.288367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:10 crc kubenswrapper[4829]: E0217 17:16:10.289815 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:21 crc kubenswrapper[4829]: E0217 17:16:21.281823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:22 crc kubenswrapper[4829]: E0217 17:16:22.281683 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:33 crc kubenswrapper[4829]: E0217 17:16:33.282646 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:37 crc kubenswrapper[4829]: E0217 17:16:37.286277 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:44 crc kubenswrapper[4829]: E0217 17:16:44.282810 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:16:52 crc kubenswrapper[4829]: E0217 17:16:52.285662 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:16:56 crc kubenswrapper[4829]: E0217 17:16:56.284129 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:03 crc kubenswrapper[4829]: E0217 17:17:03.281880 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:11 crc kubenswrapper[4829]: E0217 17:17:11.282089 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:15 crc kubenswrapper[4829]: E0217 17:17:15.282504 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:22 crc kubenswrapper[4829]: I0217 17:17:22.424864 4829 patch_prober.go:28] interesting 
pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:22 crc kubenswrapper[4829]: I0217 17:17:22.425408 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:23 crc kubenswrapper[4829]: E0217 17:17:23.281496 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:28 crc kubenswrapper[4829]: E0217 17:17:28.291967 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:36 crc kubenswrapper[4829]: E0217 17:17:36.282713 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:41 crc kubenswrapper[4829]: E0217 17:17:41.282313 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:17:51 crc kubenswrapper[4829]: E0217 17:17:51.282187 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:17:52 crc kubenswrapper[4829]: I0217 17:17:52.424334 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:52 crc kubenswrapper[4829]: I0217 17:17:52.424401 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:56 crc kubenswrapper[4829]: E0217 17:17:56.283478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:05 crc kubenswrapper[4829]: E0217 17:18:05.281822 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:10 crc kubenswrapper[4829]: E0217 17:18:10.284779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:20 crc kubenswrapper[4829]: E0217 17:18:20.281878 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.424938 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.425439 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.425484 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" 
Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.426372 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:18:22 crc kubenswrapper[4829]: I0217 17:18:22.426428 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" gracePeriod=600 Feb 17 17:18:22 crc kubenswrapper[4829]: E0217 17:18:22.559903 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.555672 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" exitCode=0 Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.555764 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80"} Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.556021 4829 scope.go:117] 
"RemoveContainer" containerID="e216e85147f559503eec25bca9cb65e443f36e00c349c94fc0baac207d843787" Feb 17 17:18:23 crc kubenswrapper[4829]: I0217 17:18:23.557040 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:23 crc kubenswrapper[4829]: E0217 17:18:23.557507 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:24 crc kubenswrapper[4829]: E0217 17:18:24.283388 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:35 crc kubenswrapper[4829]: E0217 17:18:35.281505 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:36 crc kubenswrapper[4829]: E0217 17:18:36.281602 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:38 
crc kubenswrapper[4829]: I0217 17:18:38.279508 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:38 crc kubenswrapper[4829]: E0217 17:18:38.280427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:46 crc kubenswrapper[4829]: E0217 17:18:46.281829 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:18:50 crc kubenswrapper[4829]: E0217 17:18:50.285412 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:18:51 crc kubenswrapper[4829]: I0217 17:18:51.279597 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:18:51 crc kubenswrapper[4829]: E0217 17:18:51.280220 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:18:58 crc kubenswrapper[4829]: E0217 17:18:58.288273 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:03 crc kubenswrapper[4829]: I0217 17:19:03.279568 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:03 crc kubenswrapper[4829]: E0217 17:19:03.281092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:04 crc kubenswrapper[4829]: E0217 17:19:04.281559 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:10 crc kubenswrapper[4829]: I0217 17:19:10.284752 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409396 4829 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409714 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.409850 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:10 crc kubenswrapper[4829]: E0217 17:19:10.411117 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.380303 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.380823 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.381190 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:15 crc kubenswrapper[4829]: E0217 17:19:15.382990 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:18 crc kubenswrapper[4829]: I0217 17:19:18.290624 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:18 crc kubenswrapper[4829]: E0217 17:19:18.291900 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:22 crc kubenswrapper[4829]: E0217 17:19:22.283675 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:28 crc kubenswrapper[4829]: E0217 17:19:28.296687 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:33 crc kubenswrapper[4829]: I0217 17:19:33.279784 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:33 crc kubenswrapper[4829]: E0217 17:19:33.280718 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:37 crc kubenswrapper[4829]: E0217 17:19:37.281456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:40 crc kubenswrapper[4829]: E0217 17:19:40.286447 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:19:47 crc kubenswrapper[4829]: I0217 17:19:47.280339 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:19:47 crc kubenswrapper[4829]: E0217 17:19:47.281318 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:19:50 crc kubenswrapper[4829]: E0217 17:19:50.289337 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:19:53 crc kubenswrapper[4829]: E0217 17:19:53.282285 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:00 crc kubenswrapper[4829]: I0217 17:20:00.280211 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:00 crc kubenswrapper[4829]: E0217 17:20:00.281020 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:03 crc kubenswrapper[4829]: E0217 17:20:03.281779 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:08 crc kubenswrapper[4829]: E0217 17:20:08.290722 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:12 crc kubenswrapper[4829]: I0217 17:20:12.280491 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:12 crc kubenswrapper[4829]: E0217 17:20:12.280763 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:16 crc kubenswrapper[4829]: E0217 17:20:16.282404 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:21 crc kubenswrapper[4829]: E0217 17:20:21.282103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:25 crc kubenswrapper[4829]: I0217 17:20:25.279862 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:25 crc kubenswrapper[4829]: E0217 17:20:25.280782 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:30 crc kubenswrapper[4829]: E0217 17:20:30.282614 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:34 crc kubenswrapper[4829]: E0217 17:20:34.283103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:38 crc kubenswrapper[4829]: I0217 17:20:38.287750 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:38 crc kubenswrapper[4829]: E0217 17:20:38.288511 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:41 crc kubenswrapper[4829]: E0217 17:20:41.281323 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:46 crc kubenswrapper[4829]: E0217 17:20:46.286943 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:20:52 crc kubenswrapper[4829]: I0217 17:20:52.281937 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:20:52 crc kubenswrapper[4829]: E0217 17:20:52.282762 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:20:55 crc kubenswrapper[4829]: E0217 17:20:55.282321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:20:57 crc kubenswrapper[4829]: E0217 17:20:57.287128 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:04 crc kubenswrapper[4829]: I0217 17:21:04.279356 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:04 crc kubenswrapper[4829]: E0217 17:21:04.280397 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:06 crc kubenswrapper[4829]: E0217 17:21:06.282546 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:10 crc kubenswrapper[4829]: E0217 17:21:10.282335 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:15 crc kubenswrapper[4829]: I0217 17:21:15.280013 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:15 crc kubenswrapper[4829]: E0217 17:21:15.280692 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:18 crc kubenswrapper[4829]: E0217 17:21:18.290923 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:25 crc kubenswrapper[4829]: E0217 17:21:25.281540 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:28 crc kubenswrapper[4829]: I0217 17:21:28.295223 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:28 crc kubenswrapper[4829]: E0217 17:21:28.296945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:29 crc kubenswrapper[4829]: I0217 17:21:29.471285 4829 generic.go:334] "Generic (PLEG): container finished" podID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerID="b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55" exitCode=2 Feb 17 
17:21:29 crc kubenswrapper[4829]: I0217 17:21:29.471457 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerDied","Data":"b417b277d1b59732230bd5fe7d6a234dfcc6488960571858881c4f7a21209f55"} Feb 17 17:21:30 crc kubenswrapper[4829]: I0217 17:21:30.976260 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056337 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056753 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.056835 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") pod \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\" (UID: \"70fdafba-a123-4ccf-bcde-f3027dcbbf1b\") " Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.063060 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq" (OuterVolumeSpecName: "kube-api-access-5zxvq") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). 
InnerVolumeSpecName "kube-api-access-5zxvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.087632 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory" (OuterVolumeSpecName: "inventory") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.091207 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70fdafba-a123-4ccf-bcde-f3027dcbbf1b" (UID: "70fdafba-a123-4ccf-bcde-f3027dcbbf1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160820 4829 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160867 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zxvq\" (UniqueName: \"kubernetes.io/projected/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-kube-api-access-5zxvq\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.160884 4829 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70fdafba-a123-4ccf-bcde-f3027dcbbf1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492479 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" event={"ID":"70fdafba-a123-4ccf-bcde-f3027dcbbf1b","Type":"ContainerDied","Data":"ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752"} Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492512 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ede3fc4dfca23d93a560843285d02b4357d4351e06b51ca527a6c91c3cf9c752" Feb 17 17:21:31 crc kubenswrapper[4829]: I0217 17:21:31.492617 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw" Feb 17 17:21:33 crc kubenswrapper[4829]: E0217 17:21:33.281732 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:36 crc kubenswrapper[4829]: E0217 17:21:36.283319 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:40 crc kubenswrapper[4829]: I0217 17:21:40.281179 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:40 crc kubenswrapper[4829]: E0217 17:21:40.282042 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:48 crc kubenswrapper[4829]: E0217 17:21:48.294527 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:21:50 crc kubenswrapper[4829]: E0217 17:21:50.282188 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:21:52 crc kubenswrapper[4829]: I0217 17:21:52.279400 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:21:52 crc kubenswrapper[4829]: E0217 17:21:52.280044 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:21:59 crc kubenswrapper[4829]: E0217 17:21:59.287409 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:02 crc kubenswrapper[4829]: E0217 17:22:02.283342 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:06 crc kubenswrapper[4829]: I0217 17:22:06.280246 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:06 crc kubenswrapper[4829]: E0217 17:22:06.281102 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:14 crc kubenswrapper[4829]: E0217 17:22:14.341390 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:14 crc kubenswrapper[4829]: E0217 17:22:14.341483 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:18 crc kubenswrapper[4829]: I0217 17:22:18.286756 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:18 crc kubenswrapper[4829]: E0217 17:22:18.287321 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:26 crc kubenswrapper[4829]: E0217 17:22:26.280989 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:28 crc kubenswrapper[4829]: E0217 17:22:28.291084 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:30 crc kubenswrapper[4829]: I0217 17:22:30.280118 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:30 crc kubenswrapper[4829]: E0217 17:22:30.281004 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:39 crc kubenswrapper[4829]: E0217 17:22:39.288064 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:41 crc kubenswrapper[4829]: E0217 17:22:41.282839 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:45 crc kubenswrapper[4829]: I0217 17:22:45.280072 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:45 crc kubenswrapper[4829]: E0217 17:22:45.281944 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:22:54 crc kubenswrapper[4829]: E0217 17:22:54.281309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:22:54 crc kubenswrapper[4829]: E0217 17:22:54.281346 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:22:57 crc kubenswrapper[4829]: I0217 17:22:57.280548 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:22:57 crc kubenswrapper[4829]: E0217 17:22:57.281413 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:08 crc kubenswrapper[4829]: E0217 17:23:08.289749 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:09 crc kubenswrapper[4829]: I0217 17:23:09.282504 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:09 crc kubenswrapper[4829]: E0217 17:23:09.283159 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:09 crc kubenswrapper[4829]: E0217 17:23:09.286261 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.275366 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:10 crc kubenswrapper[4829]: E0217 17:23:10.276193 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.276210 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.276449 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fdafba-a123-4ccf-bcde-f3027dcbbf1b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.277794 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.284650 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmblp"/"openshift-service-ca.crt" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.285153 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bmblp"/"default-dockercfg-kqp75" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.285698 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmblp"/"kube-root-ca.crt" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.295782 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.374473 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.375100 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.478161 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " 
pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.479080 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.479476 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.501276 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"must-gather-bqwqp\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:10 crc kubenswrapper[4829]: I0217 17:23:10.609859 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:23:11 crc kubenswrapper[4829]: I0217 17:23:11.338550 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:23:11 crc kubenswrapper[4829]: I0217 17:23:11.560155 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"36f177bc87d78b91e8368779591515fa213a4d940eb62236187acd5077b3fd85"} Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.497259 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.619438 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.619601 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.790484 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.791019 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.791323 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.893677 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894116 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod 
\"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894364 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894842 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.894849 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.915034 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") pod \"certified-operators-7sx62\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:14 crc kubenswrapper[4829]: I0217 17:23:14.956282 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:20 crc kubenswrapper[4829]: E0217 17:23:20.281088 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.333444 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:20 crc kubenswrapper[4829]: W0217 17:23:20.343795 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ec332b_ef73_41c2_8ece_63d68db3a6ac.slice/crio-fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909 WatchSource:0}: Error finding container fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909: Status 404 returned error can't find the container with id fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909 Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.668241 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.668286 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerStarted","Data":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673331 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c" exitCode=0 Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673376 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.673398 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909"} Feb 17 17:23:20 crc kubenswrapper[4829]: I0217 17:23:20.688376 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmblp/must-gather-bqwqp" podStartSLOduration=2.14404433 podStartE2EDuration="10.688353595s" podCreationTimestamp="2026-02-17 17:23:10 +0000 UTC" firstStartedPulling="2026-02-17 17:23:11.344905116 +0000 UTC m=+5303.761923104" lastFinishedPulling="2026-02-17 17:23:19.889214391 +0000 UTC m=+5312.306232369" observedRunningTime="2026-02-17 17:23:20.684385687 +0000 UTC m=+5313.101403685" watchObservedRunningTime="2026-02-17 17:23:20.688353595 +0000 UTC m=+5313.105371573" Feb 17 17:23:21 crc kubenswrapper[4829]: I0217 17:23:21.280141 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:21 crc kubenswrapper[4829]: E0217 17:23:21.280743 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:23:22 crc kubenswrapper[4829]: E0217 17:23:22.281698 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:22 crc kubenswrapper[4829]: I0217 17:23:22.698631 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9"} Feb 17 17:23:23 crc kubenswrapper[4829]: E0217 17:23:23.523627 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ec332b_ef73_41c2_8ece_63d68db3a6ac.slice/crio-db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:23:23 crc kubenswrapper[4829]: I0217 17:23:23.712637 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9" exitCode=0 Feb 17 17:23:23 crc kubenswrapper[4829]: I0217 17:23:23.712695 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9"} Feb 17 17:23:25 crc kubenswrapper[4829]: I0217 17:23:25.738116 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" 
event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerStarted","Data":"bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105"} Feb 17 17:23:25 crc kubenswrapper[4829]: I0217 17:23:25.764871 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sx62" podStartSLOduration=7.305615414 podStartE2EDuration="11.764849353s" podCreationTimestamp="2026-02-17 17:23:14 +0000 UTC" firstStartedPulling="2026-02-17 17:23:20.675869964 +0000 UTC m=+5313.092887942" lastFinishedPulling="2026-02-17 17:23:25.135103893 +0000 UTC m=+5317.552121881" observedRunningTime="2026-02-17 17:23:25.756763792 +0000 UTC m=+5318.173781790" watchObservedRunningTime="2026-02-17 17:23:25.764849353 +0000 UTC m=+5318.181867331" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.179348 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.181414 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.300880 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.301252 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403673 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403735 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.403804 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc 
kubenswrapper[4829]: I0217 17:23:28.432787 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"crc-debug-qtsp7\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.502991 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:23:28 crc kubenswrapper[4829]: I0217 17:23:28.776392 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerStarted","Data":"2766fa515c7c5536d5585a5a1b48c5ea41cda2a43fa25926248336cd2b999247"} Feb 17 17:23:33 crc kubenswrapper[4829]: I0217 17:23:33.281634 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:23:33 crc kubenswrapper[4829]: E0217 17:23:33.284861 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:33 crc kubenswrapper[4829]: I0217 17:23:33.859371 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} Feb 17 17:23:34 crc kubenswrapper[4829]: I0217 17:23:34.956596 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:34 crc 
kubenswrapper[4829]: I0217 17:23:34.957178 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:35 crc kubenswrapper[4829]: I0217 17:23:35.097255 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:35 crc kubenswrapper[4829]: E0217 17:23:35.283741 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:35 crc kubenswrapper[4829]: I0217 17:23:35.948104 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:36 crc kubenswrapper[4829]: I0217 17:23:36.005491 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:37 crc kubenswrapper[4829]: I0217 17:23:37.908495 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sx62" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" containerID="cri-o://bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" gracePeriod=2 Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.063023 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.066224 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.076090 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173309 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173446 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.173478 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275514 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275671 4829 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.275705 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.276237 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.278094 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.313873 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"redhat-marketplace-xdjdf\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.398785 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.925628 4829 generic.go:334] "Generic (PLEG): container finished" podID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerID="bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" exitCode=0 Feb 17 17:23:38 crc kubenswrapper[4829]: I0217 17:23:38.925918 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105"} Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.335715 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.481410 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510215 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510318 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.510405 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") 
pod \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\" (UID: \"f6ec332b-ef73-41c2-8ece-63d68db3a6ac\") " Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.512054 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities" (OuterVolumeSpecName: "utilities") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.517705 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx" (OuterVolumeSpecName: "kube-api-access-msppx") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "kube-api-access-msppx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.572160 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ec332b-ef73-41c2-8ece-63d68db3a6ac" (UID: "f6ec332b-ef73-41c2-8ece-63d68db3a6ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.613740 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.614074 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:42 crc kubenswrapper[4829]: I0217 17:23:42.614086 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msppx\" (UniqueName: \"kubernetes.io/projected/f6ec332b-ef73-41c2-8ece-63d68db3a6ac-kube-api-access-msppx\") on node \"crc\" DevicePath \"\"" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007777 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606" exitCode=0 Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007857 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.007885 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerStarted","Data":"4660542cd1e6d1038696b3b3c19f270dc14e3e7daa0c7a582a55fec95b5904de"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010209 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sx62" 
event={"ID":"f6ec332b-ef73-41c2-8ece-63d68db3a6ac","Type":"ContainerDied","Data":"fe5595b3a239de0783482dde743ff26e6f001ca4d9c6dd339f37690c3535a909"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010265 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sx62" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.010360 4829 scope.go:117] "RemoveContainer" containerID="bdb3fb5bb231eca8a5dfa1ede0759bc80807cde7361972bc3b28e0a678aaa105" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.012988 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerStarted","Data":"829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e"} Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.042993 4829 scope.go:117] "RemoveContainer" containerID="db4a13ea66ccff9c79ffdff94199ef0b5786c51cb2437f6f850a3d69b95333a9" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.068431 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" podStartSLOduration=1.6925776639999999 podStartE2EDuration="15.068410891s" podCreationTimestamp="2026-02-17 17:23:28 +0000 UTC" firstStartedPulling="2026-02-17 17:23:28.540888038 +0000 UTC m=+5320.957906016" lastFinishedPulling="2026-02-17 17:23:41.916721265 +0000 UTC m=+5334.333739243" observedRunningTime="2026-02-17 17:23:43.046163093 +0000 UTC m=+5335.463181081" watchObservedRunningTime="2026-02-17 17:23:43.068410891 +0000 UTC m=+5335.485428869" Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.082920 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.092826 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-7sx62"] Feb 17 17:23:43 crc kubenswrapper[4829]: I0217 17:23:43.563434 4829 scope.go:117] "RemoveContainer" containerID="ad369c3e60ad77015e758b1ad17605f35f9c1da98db67dd79805a91ded25d10c" Feb 17 17:23:44 crc kubenswrapper[4829]: I0217 17:23:44.294964 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" path="/var/lib/kubelet/pods/f6ec332b-ef73-41c2-8ece-63d68db3a6ac/volumes" Feb 17 17:23:45 crc kubenswrapper[4829]: I0217 17:23:45.037370 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8" exitCode=0 Feb 17 17:23:45 crc kubenswrapper[4829]: I0217 17:23:45.037422 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8"} Feb 17 17:23:45 crc kubenswrapper[4829]: E0217 17:23:45.281522 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:46 crc kubenswrapper[4829]: E0217 17:23:46.281993 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:23:47 crc kubenswrapper[4829]: I0217 17:23:47.061033 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerStarted","Data":"ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681"} Feb 17 17:23:47 crc kubenswrapper[4829]: I0217 17:23:47.082977 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdjdf" podStartSLOduration=6.657288181 podStartE2EDuration="9.082947162s" podCreationTimestamp="2026-02-17 17:23:38 +0000 UTC" firstStartedPulling="2026-02-17 17:23:43.009658327 +0000 UTC m=+5335.426676305" lastFinishedPulling="2026-02-17 17:23:45.435317308 +0000 UTC m=+5337.852335286" observedRunningTime="2026-02-17 17:23:47.077816472 +0000 UTC m=+5339.494834460" watchObservedRunningTime="2026-02-17 17:23:47.082947162 +0000 UTC m=+5339.499965140" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.399643 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.399962 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:48 crc kubenswrapper[4829]: I0217 17:23:48.454350 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:23:57 crc kubenswrapper[4829]: E0217 17:23:57.282000 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:23:58 crc kubenswrapper[4829]: I0217 17:23:58.458209 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 
17:23:58 crc kubenswrapper[4829]: I0217 17:23:58.542517 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:23:59 crc kubenswrapper[4829]: I0217 17:23:59.203624 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdjdf" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" containerID="cri-o://ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" gracePeriod=2 Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.281089 4829 generic.go:334] "Generic (PLEG): container finished" podID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerID="ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" exitCode=0 Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.340184 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681"} Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.457857 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.517392 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.518025 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.518103 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") pod \"58e44360-7cec-4d73-b5a7-1abc208e7e82\" (UID: \"58e44360-7cec-4d73-b5a7-1abc208e7e82\") " Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.536687 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities" (OuterVolumeSpecName: "utilities") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.543096 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh" (OuterVolumeSpecName: "kube-api-access-df5hh") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "kube-api-access-df5hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.553163 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e44360-7cec-4d73-b5a7-1abc208e7e82" (UID: "58e44360-7cec-4d73-b5a7-1abc208e7e82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621446 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621485 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e44360-7cec-4d73-b5a7-1abc208e7e82-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:00 crc kubenswrapper[4829]: I0217 17:24:00.621497 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5hh\" (UniqueName: \"kubernetes.io/projected/58e44360-7cec-4d73-b5a7-1abc208e7e82-kube-api-access-df5hh\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:01 crc kubenswrapper[4829]: E0217 17:24:01.283126 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.299637 4829 generic.go:334] "Generic (PLEG): container finished" podID="82b76227-c8f4-45e3-a632-0681deb43d58" containerID="829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e" exitCode=0 Feb 17 17:24:01 crc 
kubenswrapper[4829]: I0217 17:24:01.299790 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" event={"ID":"82b76227-c8f4-45e3-a632-0681deb43d58","Type":"ContainerDied","Data":"829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e"} Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304483 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjdf" event={"ID":"58e44360-7cec-4d73-b5a7-1abc208e7e82","Type":"ContainerDied","Data":"4660542cd1e6d1038696b3b3c19f270dc14e3e7daa0c7a582a55fec95b5904de"} Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304542 4829 scope.go:117] "RemoveContainer" containerID="ea0cfbb480b24c81014242c5546b9afe35fdb5de68abc247c5daecf068d61681" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.304765 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjdf" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.350865 4829 scope.go:117] "RemoveContainer" containerID="e6c0a8bd8672f43269bd7a476a2bb0e3ecead7be4ffc77562e80f5cef2ba2ae8" Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.360550 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.384895 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjdf"] Feb 17 17:24:01 crc kubenswrapper[4829]: I0217 17:24:01.398666 4829 scope.go:117] "RemoveContainer" containerID="942e44b2b0824c292ae342433c767b59e8a8e199c708b91ddcd19ebde8b84606" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.296349 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" path="/var/lib/kubelet/pods/58e44360-7cec-4d73-b5a7-1abc208e7e82/volumes" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 
17:24:02.433391 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.465886 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") pod \"82b76227-c8f4-45e3-a632-0681deb43d58\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.465967 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host" (OuterVolumeSpecName: "host") pod "82b76227-c8f4-45e3-a632-0681deb43d58" (UID: "82b76227-c8f4-45e3-a632-0681deb43d58"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.466274 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") pod \"82b76227-c8f4-45e3-a632-0681deb43d58\" (UID: \"82b76227-c8f4-45e3-a632-0681deb43d58\") " Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.467213 4829 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82b76227-c8f4-45e3-a632-0681deb43d58-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.474669 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h" (OuterVolumeSpecName: "kube-api-access-t6x4h") pod "82b76227-c8f4-45e3-a632-0681deb43d58" (UID: "82b76227-c8f4-45e3-a632-0681deb43d58"). InnerVolumeSpecName "kube-api-access-t6x4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.478389 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.504393 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-qtsp7"] Feb 17 17:24:02 crc kubenswrapper[4829]: I0217 17:24:02.569516 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6x4h\" (UniqueName: \"kubernetes.io/projected/82b76227-c8f4-45e3-a632-0681deb43d58-kube-api-access-t6x4h\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.327515 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2766fa515c7c5536d5585a5a1b48c5ea41cda2a43fa25926248336cd2b999247" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.327627 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-qtsp7" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.720509 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"] Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721082 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721104 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721123 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721128 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721135 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721141 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721167 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721173 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-utilities" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721187 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721196 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721218 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721225 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="extract-content" Feb 17 17:24:03 crc kubenswrapper[4829]: E0217 17:24:03.721252 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721259 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721460 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ec332b-ef73-41c2-8ece-63d68db3a6ac" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721473 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" containerName="container-00" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.721516 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e44360-7cec-4d73-b5a7-1abc208e7e82" containerName="registry-server" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.722554 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.904486 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:03 crc kubenswrapper[4829]: I0217 17:24:03.904545 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.016845 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.016903 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.017204 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:04 crc 
kubenswrapper[4829]: I0217 17:24:04.065535 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"crc-debug-pgwb4\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.291095 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b76227-c8f4-45e3-a632-0681deb43d58" path="/var/lib/kubelet/pods/82b76227-c8f4-45e3-a632-0681deb43d58/volumes" Feb 17 17:24:04 crc kubenswrapper[4829]: I0217 17:24:04.342352 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.349102 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" event={"ID":"af8da55a-65a7-46c1-9af1-545ef9cc95bf","Type":"ContainerStarted","Data":"b2824880faea376d2179d0441ab4ac002a31e2603381c7480f1f7942b463f64f"} Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.689056 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.692715 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.702998 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.860590 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.861199 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.861543 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963723 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963788 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.963996 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.964184 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.964227 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:05 crc kubenswrapper[4829]: I0217 17:24:05.991645 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"community-operators-zxv8d\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.015771 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.370219 4829 generic.go:334] "Generic (PLEG): container finished" podID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerID="ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321" exitCode=1 Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.370450 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" event={"ID":"af8da55a-65a7-46c1-9af1-545ef9cc95bf","Type":"ContainerDied","Data":"ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321"} Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.450624 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"] Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.471815 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/crc-debug-pgwb4"] Feb 17 17:24:06 crc kubenswrapper[4829]: I0217 17:24:06.807279 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.385983 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919"} Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.386303 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01"} Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.602245 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.729247 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") pod \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.729467 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") pod \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\" (UID: \"af8da55a-65a7-46c1-9af1-545ef9cc95bf\") " Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.730798 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host" (OuterVolumeSpecName: "host") pod "af8da55a-65a7-46c1-9af1-545ef9cc95bf" (UID: "af8da55a-65a7-46c1-9af1-545ef9cc95bf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.749248 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv" (OuterVolumeSpecName: "kube-api-access-96lkv") pod "af8da55a-65a7-46c1-9af1-545ef9cc95bf" (UID: "af8da55a-65a7-46c1-9af1-545ef9cc95bf"). InnerVolumeSpecName "kube-api-access-96lkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.832205 4829 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af8da55a-65a7-46c1-9af1-545ef9cc95bf-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:07 crc kubenswrapper[4829]: I0217 17:24:07.832244 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96lkv\" (UniqueName: \"kubernetes.io/projected/af8da55a-65a7-46c1-9af1-545ef9cc95bf-kube-api-access-96lkv\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.322442 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" path="/var/lib/kubelet/pods/af8da55a-65a7-46c1-9af1-545ef9cc95bf/volumes" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.324012 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:24:08 crc kubenswrapper[4829]: E0217 17:24:08.324356 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.324371 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.346745 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8da55a-65a7-46c1-9af1-545ef9cc95bf" containerName="container-00" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.348735 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.353795 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.432717 4829 scope.go:117] "RemoveContainer" containerID="ef25d6fdd9b786fc64cf0ef21fc5c7392190e11196687471867c0c8708d6c321" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.432763 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/crc-debug-pgwb4" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.437395 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerID="ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919" exitCode=0 Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.437435 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919"} Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.450430 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.450539 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " 
pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.451977 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.557565 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.557957 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.558343 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.559363 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " 
pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.561982 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.577953 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"redhat-operators-msh7b\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: I0217 17:24:08.696857 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:08 crc kubenswrapper[4829]: E0217 17:24:08.725908 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8da55a_65a7_46c1_9af1_545ef9cc95bf.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:24:09 crc kubenswrapper[4829]: I0217 17:24:09.312732 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:24:09 crc kubenswrapper[4829]: W0217 17:24:09.328605 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b16d00f_7aac_42b2_ba34_9cf5cffbfddc.slice/crio-f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d WatchSource:0}: Error finding container f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d: Status 404 returned error can't find the container with 
id f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d Feb 17 17:24:09 crc kubenswrapper[4829]: I0217 17:24:09.453636 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d"} Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.471275 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2" exitCode=0 Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.471389 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2"} Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.473755 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:24:10 crc kubenswrapper[4829]: I0217 17:24:10.477933 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0"} Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.489068 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51"} Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.492642 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" 
containerID="bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0" exitCode=0 Feb 17 17:24:11 crc kubenswrapper[4829]: I0217 17:24:11.492693 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0"} Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.404823 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.405171 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.405311 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:24:12 crc kubenswrapper[4829]: E0217 17:24:12.406529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:12 crc kubenswrapper[4829]: I0217 17:24:12.507255 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerStarted","Data":"87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb"} Feb 17 17:24:12 crc kubenswrapper[4829]: I0217 17:24:12.541189 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxv8d" podStartSLOduration=4.088616378 podStartE2EDuration="7.541171559s" podCreationTimestamp="2026-02-17 17:24:05 +0000 UTC" firstStartedPulling="2026-02-17 17:24:08.484027175 +0000 UTC m=+5360.901045153" lastFinishedPulling="2026-02-17 17:24:11.936582356 +0000 UTC m=+5364.353600334" observedRunningTime="2026-02-17 17:24:12.527806805 +0000 UTC m=+5364.944824793" watchObservedRunningTime="2026-02-17 17:24:12.541171559 +0000 UTC m=+5364.958189537" Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.017496 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.018058 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.398900 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.399231 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.399394 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:24:16 crc kubenswrapper[4829]: E0217 17:24:16.401074 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:16 crc kubenswrapper[4829]: I0217 17:24:16.507116 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:19 crc kubenswrapper[4829]: I0217 17:24:19.593095 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51" exitCode=0 Feb 17 17:24:19 crc kubenswrapper[4829]: I0217 17:24:19.593405 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51"} Feb 17 17:24:20 crc kubenswrapper[4829]: I0217 17:24:20.605338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerStarted","Data":"c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2"} Feb 17 17:24:20 crc kubenswrapper[4829]: I0217 17:24:20.631637 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-msh7b" podStartSLOduration=3.130686858 
podStartE2EDuration="12.631616955s" podCreationTimestamp="2026-02-17 17:24:08 +0000 UTC" firstStartedPulling="2026-02-17 17:24:10.473453879 +0000 UTC m=+5362.890471857" lastFinishedPulling="2026-02-17 17:24:19.974383976 +0000 UTC m=+5372.391401954" observedRunningTime="2026-02-17 17:24:20.621371956 +0000 UTC m=+5373.038389934" watchObservedRunningTime="2026-02-17 17:24:20.631616955 +0000 UTC m=+5373.048634933" Feb 17 17:24:24 crc kubenswrapper[4829]: E0217 17:24:24.283935 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.072889 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.128519 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:26 crc kubenswrapper[4829]: I0217 17:24:26.678465 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxv8d" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" containerID="cri-o://87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" gracePeriod=2 Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.715800 4829 generic.go:334] "Generic (PLEG): container finished" podID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerID="87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" exitCode=0 Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.715883 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" 
event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb"} Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.716053 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxv8d" event={"ID":"eeb860ed-6cd7-4618-8ea7-158f7e3251d8","Type":"ContainerDied","Data":"59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01"} Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.716071 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59baaafedf25da29107ccbd0aca8b50c9c022efb6c96c2847f978bb865676b01" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.787778 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.844841 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.845166 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.845292 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") pod \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\" (UID: \"eeb860ed-6cd7-4618-8ea7-158f7e3251d8\") " Feb 17 17:24:27 crc kubenswrapper[4829]: 
I0217 17:24:27.850003 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities" (OuterVolumeSpecName: "utilities") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.852846 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf" (OuterVolumeSpecName: "kube-api-access-phglf") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "kube-api-access-phglf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.908524 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeb860ed-6cd7-4618-8ea7-158f7e3251d8" (UID: "eeb860ed-6cd7-4618-8ea7-158f7e3251d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948248 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948310 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:27 crc kubenswrapper[4829]: I0217 17:24:27.948325 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phglf\" (UniqueName: \"kubernetes.io/projected/eeb860ed-6cd7-4618-8ea7-158f7e3251d8-kube-api-access-phglf\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.697861 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.698235 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.726095 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zxv8d" Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.757044 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:28 crc kubenswrapper[4829]: I0217 17:24:28.767476 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxv8d"] Feb 17 17:24:29 crc kubenswrapper[4829]: E0217 17:24:29.281326 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:29 crc kubenswrapper[4829]: I0217 17:24:29.763180 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:29 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:29 crc kubenswrapper[4829]: > Feb 17 17:24:30 crc kubenswrapper[4829]: I0217 17:24:30.294671 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" path="/var/lib/kubelet/pods/eeb860ed-6cd7-4618-8ea7-158f7e3251d8/volumes" Feb 17 17:24:36 crc kubenswrapper[4829]: E0217 17:24:36.281823 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:39 crc kubenswrapper[4829]: I0217 17:24:39.754117 4829 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:39 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:39 crc kubenswrapper[4829]: > Feb 17 17:24:40 crc kubenswrapper[4829]: E0217 17:24:40.285040 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:49 crc kubenswrapper[4829]: E0217 17:24:49.290949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:24:49 crc kubenswrapper[4829]: I0217 17:24:49.745357 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:49 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:49 crc kubenswrapper[4829]: > Feb 17 17:24:55 crc kubenswrapper[4829]: E0217 17:24:55.282204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 
17:24:58.756262 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 17:24:58.811765 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:24:58 crc kubenswrapper[4829]: I0217 17:24:58.996928 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:00 crc kubenswrapper[4829]: I0217 17:25:00.038527 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-msh7b" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" containerID="cri-o://c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" gracePeriod=2 Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.050190 4829 generic.go:334] "Generic (PLEG): container finished" podID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerID="c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" exitCode=0 Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.050270 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2"} Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.276223 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:25:01 crc kubenswrapper[4829]: E0217 17:25:01.281140 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.389847 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390044 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390148 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") pod \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\" (UID: \"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc\") " Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.390803 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities" (OuterVolumeSpecName: "utilities") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.391436 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.397913 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp" (OuterVolumeSpecName: "kube-api-access-qd7vp") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "kube-api-access-qd7vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.493797 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd7vp\" (UniqueName: \"kubernetes.io/projected/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-kube-api-access-qd7vp\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.521611 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" (UID: "6b16d00f-7aac-42b2-ba34-9cf5cffbfddc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:01 crc kubenswrapper[4829]: I0217 17:25:01.595731 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.062989 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-msh7b" event={"ID":"6b16d00f-7aac-42b2-ba34-9cf5cffbfddc","Type":"ContainerDied","Data":"f19ce100dd059ced02806b372a6277eb335973975a343a30c505709e3be7d40d"} Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.063072 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-msh7b" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.063302 4829 scope.go:117] "RemoveContainer" containerID="c085f9d7f31e69d08b21fd337acfe2370e8c96adbaaf8e48f9d3a7e7b65691c2" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.100805 4829 scope.go:117] "RemoveContainer" containerID="3827a3c37db077801a27dc83ba9c9bd382ee5ee54b2e46fa9feeb225ac795e51" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.108059 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.118660 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-msh7b"] Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.148809 4829 scope.go:117] "RemoveContainer" containerID="f107c40e48927d93cce3bee8bac91fc3d173436e04a697bae13caca92c81afe2" Feb 17 17:25:02 crc kubenswrapper[4829]: I0217 17:25:02.295266 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" path="/var/lib/kubelet/pods/6b16d00f-7aac-42b2-ba34-9cf5cffbfddc/volumes" Feb 17 17:25:06 crc 
kubenswrapper[4829]: E0217 17:25:06.282309 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:12 crc kubenswrapper[4829]: E0217 17:25:12.282345 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:17 crc kubenswrapper[4829]: E0217 17:25:17.284200 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.138563 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-api/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.425872 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-evaluator/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.507287 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-listener/0.log" Feb 17 17:25:23 crc kubenswrapper[4829]: I0217 17:25:23.584971 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_58d7c5e4-0195-41e6-afd9-9f31d6472d61/aodh-notifier/0.log" 
Feb 17 17:25:24 crc kubenswrapper[4829]: E0217 17:25:24.286044 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.488752 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-744588c6bd-fsx8x_652438ae-668e-4017-a88c-c6737fd0db78/barbican-api/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.506913 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-744588c6bd-fsx8x_652438ae-668e-4017-a88c-c6737fd0db78/barbican-api-log/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.685185 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b9b6dfd6-gq6hn_5f483139-9fb6-4db6-8c40-846d8bd69556/barbican-keystone-listener/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.752769 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b9b6dfd6-gq6hn_5f483139-9fb6-4db6-8c40-846d8bd69556/barbican-keystone-listener-log/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.833386 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765797c7c9-2cts6_87043d23-60bf-443c-8db4-2679d7269f6c/barbican-worker/0.log" Feb 17 17:25:24 crc kubenswrapper[4829]: I0217 17:25:24.907610 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765797c7c9-2cts6_87043d23-60bf-443c-8db4-2679d7269f6c/barbican-worker-log/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.097072 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-thfkj_9f00333b-9c18-4a8c-b409-2961da9afccc/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.293360 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/proxy-httpd/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.328932 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/ceilometer-notification-agent/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.435508 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e01f505e-09de-4b7d-ae8a-b9f392c3b592/sg-core/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.522203 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_816bca39-deec-496c-bb97-40d4ad4ca878/cinder-api-log/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.608449 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_816bca39-deec-496c-bb97-40d4ad4ca878/cinder-api/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.733991 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0feacb21-5300-40f2-bee7-fac4613c2977/cinder-scheduler/0.log" Feb 17 17:25:25 crc kubenswrapper[4829]: I0217 17:25:25.833814 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0feacb21-5300-40f2-bee7-fac4613c2977/probe/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.502015 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/init/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.726878 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/init/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.815029 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hfgfn_de1b2a48-73a6-48b7-94d8-1c24530f4d2b/dnsmasq-dns/0.log" Feb 17 17:25:26 crc kubenswrapper[4829]: I0217 17:25:26.899059 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bp7df_30690071-6fc2-4647-82c0-6e5234005aec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.087228 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fwv9q_60a577ad-f610-459b-9f2d-19c6bc6f356a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.170609 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-mjgb5_9a6550f4-cdf2-4365-8ce4-96642f12822f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.346194 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pwplj_5e8ebd2e-8bc3-40dd-bd0d-e3efca982b64/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.610997 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qb9pw_70fdafba-a123-4ccf-bcde-f3027dcbbf1b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.755544 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-v8r24_6a1c73d0-1366-47dc-9726-b2a5d6ed3b86/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:27 crc kubenswrapper[4829]: I0217 17:25:27.922722 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vzxlt_c0fd9f61-596b-4ef3-b6da-6ebe6b04d497/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.004850 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_417e614d-4be6-439c-9fbc-65e970d1614f/glance-httpd/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.037526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_417e614d-4be6-439c-9fbc-65e970d1614f/glance-log/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.207051 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4708c572-1818-4307-8667-0e2cb60f5635/glance-log/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.218434 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4708c572-1818-4307-8667-0e2cb60f5635/glance-httpd/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.802283 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7bf669c95c-g7msn_be43e34b-d8ec-44cd-bc26-e0ce3c9797a7/heat-api/0.log" Feb 17 17:25:28 crc kubenswrapper[4829]: I0217 17:25:28.984519 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7db87d5bbf-dtdjh_59de3866-adfb-4a8d-87f2-b54af38332d0/heat-engine/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.061112 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-66bc7b8984-mg8sc_5dfe4b1a-5f10-47f3-ab81-0807c468fab0/heat-cfnapi/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.175487 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-868ff7b66c-lx7qv_c2a8da85-ca3d-4368-8a34-4db948e7f6f3/keystone-api/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.247818 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-jp96w_7522621b-701f-4bef-8232-25fb5b8abab1/keystone-cron/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.311481 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f57285ef-f362-4fb7-8f6c-633698507b3d/kube-state-metrics/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.543525 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e39a0dce-4da5-4ff4-9e50-e2dc41d22092/mysqld-exporter/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.828159 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5598cc6dcc-p2b29_298e03dd-93bc-4a68-8589-ecec2278efd5/neutron-api/0.log" Feb 17 17:25:29 crc kubenswrapper[4829]: I0217 17:25:29.875138 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5598cc6dcc-p2b29_298e03dd-93bc-4a68-8589-ecec2278efd5/neutron-httpd/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.387601 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_62d7182c-e529-468f-8022-9fd5fc66b554/nova-api-log/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.401332 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8f709715-5e80-4988-8eb5-8bebcd673c47/nova-cell0-conductor-conductor/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.532094 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_62d7182c-e529-468f-8022-9fd5fc66b554/nova-api-api/0.log" Feb 17 17:25:30 crc kubenswrapper[4829]: I0217 17:25:30.919507 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_abe67602-ae51-43a0-b450-af654c573d9a/nova-cell1-conductor-conductor/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.012728 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fa5f0bda-7dee-4ea8-9b6c-ec30ce341044/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.131747 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e0afa824-7a82-41cc-9274-28689e2f3f57/nova-metadata-log/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: E0217 17:25:31.281674 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.497352 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_37d63bbb-2d26-4b85-8241-2785a5194a21/nova-scheduler-scheduler/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.557716 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/mysql-bootstrap/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.806552 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/mysql-bootstrap/0.log" Feb 17 17:25:31 crc kubenswrapper[4829]: I0217 17:25:31.863447 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_3949cc3c-e03d-42b7-b07f-dbdce94d7283/galera/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.044286 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/mysql-bootstrap/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.293805 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/mysql-bootstrap/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.364083 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_903a9538-3e9d-4567-a9c2-0eeaaf450b85/galera/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.521802 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4561ce68-ba71-42ad-95ec-de8b705a06ef/openstackclient/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.652694 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-75gff_e5adca8d-ac72-45d0-aa1c-3c453a78620e/ovn-controller/0.log" Feb 17 17:25:32 crc kubenswrapper[4829]: I0217 17:25:32.887596 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2hx8h_60f8527d-9ed8-4ea4-97f9-6c5f5d3fc088/openstack-network-exporter/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.127242 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server-init/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.199736 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e0afa824-7a82-41cc-9274-28689e2f3f57/nova-metadata-metadata/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.345516 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server-init/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.350548 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovsdb-server/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.397434 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kwz7l_741f1fbb-0699-4bb0-b46e-6eaa47595170/ovs-vswitchd/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.777363 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_add70c30-2098-4686-bd7d-f693219a63b8/openstack-network-exporter/0.log" Feb 17 17:25:33 crc kubenswrapper[4829]: I0217 17:25:33.834718 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_add70c30-2098-4686-bd7d-f693219a63b8/ovn-northd/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.025915 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b04054b-6716-42c5-8e1b-d7eba2bcfe4c/openstack-network-exporter/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.041333 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2b04054b-6716-42c5-8e1b-d7eba2bcfe4c/ovsdbserver-nb/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.167058 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2eeefec2-2e41-4278-8c9d-889dbf5f51ea/openstack-network-exporter/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.811146 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2eeefec2-2e41-4278-8c9d-889dbf5f51ea/ovsdbserver-sb/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.870827 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-6b8b56fc4d-7pnvr_504197ea-58c2-445f-96a1-4b812028425d/placement-api/0.log" Feb 17 17:25:34 crc kubenswrapper[4829]: I0217 17:25:34.885231 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8b56fc4d-7pnvr_504197ea-58c2-445f-96a1-4b812028425d/placement-log/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.104954 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/init-config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: E0217 17:25:35.281250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.374742 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/init-config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.384783 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/prometheus/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.394773 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/thanos-sidecar/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.405005 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0afff9a0-fd8a-4388-903e-647ae66128db/config-reloader/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.614008 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.892661 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.937415 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/setup-container/0.log" Feb 17 17:25:35 crc kubenswrapper[4829]: I0217 17:25:35.963787 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4c6b5337-789c-48a9-b772-3d96b64640e6/rabbitmq/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.188450 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/rabbitmq/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.190587 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_feaa3649-f3db-44ac-8054-cd13296c0845/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.191089 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.960324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/setup-container/0.log" Feb 17 17:25:36 crc kubenswrapper[4829]: I0217 17:25:36.987969 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_342647d1-5339-47e5-b35c-80b4406a2ea6/rabbitmq/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.006431 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/setup-container/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.341667 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/setup-container/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.365657 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vzzfp_fa5fdc9d-b2a6-4381-ac10-bd9ec9eee66e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.372977 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_13860a28-5cd6-4bf9-b60b-3872c76444a8/rabbitmq/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.634030 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-m5l2t_2b2909c1-2feb-4fa2-8a7e-e406334ade24/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.841172 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-84gsz_81b1a5c5-d463-48ba-b0d2-4409299812cb/swift-ring-rebalance/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.884379 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d69d97dcf-pdd69_cd5d005a-eb7a-4cbc-932f-2640cb8068eb/proxy-server/0.log" Feb 17 17:25:37 crc kubenswrapper[4829]: I0217 17:25:37.912729 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d69d97dcf-pdd69_cd5d005a-eb7a-4cbc-932f-2640cb8068eb/proxy-httpd/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.124287 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-reaper/0.log" Feb 17 17:25:38 crc 
kubenswrapper[4829]: I0217 17:25:38.154939 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.270994 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.322838 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/account-server/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.430274 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.492930 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.575938 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-server/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.681913 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/container-updater/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.773526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-auditor/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.780987 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-expirer/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.831766 4829 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-replicator/0.log" Feb 17 17:25:38 crc kubenswrapper[4829]: I0217 17:25:38.931288 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-server/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.017118 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/rsync/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.026577 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/object-updater/0.log" Feb 17 17:25:39 crc kubenswrapper[4829]: I0217 17:25:39.122776 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5f22317f-8a58-4b93-b29f-a0e585ac48a9/swift-recon-cron/0.log" Feb 17 17:25:44 crc kubenswrapper[4829]: I0217 17:25:44.479759 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4e3198cb-0642-46be-a9e3-33db29446377/memcached/0.log" Feb 17 17:25:46 crc kubenswrapper[4829]: E0217 17:25:46.282731 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:25:49 crc kubenswrapper[4829]: E0217 17:25:49.282419 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:25:52 crc 
kubenswrapper[4829]: I0217 17:25:52.424669 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:25:52 crc kubenswrapper[4829]: I0217 17:25:52.425054 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:01 crc kubenswrapper[4829]: E0217 17:26:01.283621 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:04 crc kubenswrapper[4829]: E0217 17:26:04.282745 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.638690 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.884735 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.911056 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:11 crc kubenswrapper[4829]: I0217 17:26:11.911291 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.109919 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/pull/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.110458 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/util/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.161992 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4c8bhj_585600e7-9faf-493f-ac02-1e8e489f6955/extract/0.log" Feb 17 17:26:12 crc kubenswrapper[4829]: I0217 17:26:12.614548 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-shssw_a711806b-ee8c-4fb8-b5da-da5e90ef06c6/manager/0.log" Feb 17 17:26:13 crc kubenswrapper[4829]: I0217 17:26:13.046032 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7j8p7_bb32d7a2-68ff-4511-a04f-fa09657791db/manager/0.log" Feb 17 17:26:13 crc 
kubenswrapper[4829]: I0217 17:26:13.484437 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9md4j_dd52262f-900a-4801-8c4c-f79787b6b715/manager/0.log" Feb 17 17:26:13 crc kubenswrapper[4829]: I0217 17:26:13.583805 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-hmtfv_84a22a6b-1fb5-4959-9342-0bcc4b033b68/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.440316 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-t57qn_60ea5425-d352-4d97-bedf-f01d07c89949/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.491850 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-vxvp7_0e275e91-4b6e-419e-b076-a6e221f8a8ac/manager/0.log" Feb 17 17:26:14 crc kubenswrapper[4829]: I0217 17:26:14.879380 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-nksk9_62cfcaa0-5c8a-4a67-95b7-83aa695a8640/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.157179 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-fw4gg_8642cada-3458-43cc-90aa-cf66a1cd6426/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.468371 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-gcxk7_5b6c89f9-2c4f-4bab-8d8b-cd746acb3426/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.478344 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-w97sk_f3add145-231f-4d7b-b9dd-115026b2a05e/manager/0.log" Feb 17 
17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.786298 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-m4df4_3aab9223-4e3f-4657-afc2-91d0e0948542/manager/0.log" Feb 17 17:26:15 crc kubenswrapper[4829]: I0217 17:26:15.936749 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-czbvb_f083cb81-0369-46de-9562-406736ae7e2f/manager/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: E0217 17:26:16.289686 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:16 crc kubenswrapper[4829]: E0217 17:26:16.289728 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.311946 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cbtkkx_a1ec01cb-62ae-4855-b830-69f896bfb5a4/manager/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.805289 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-64549bfd8b-ksr2v_f5adeb4d-89fb-480c-a429-7cf978198db2/operator/0.log" Feb 17 17:26:16 crc kubenswrapper[4829]: I0217 17:26:16.993285 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-6p47w_24ddb2b4-4194-4df5-8820-9ea9c405abc7/registry-server/0.log" Feb 17 17:26:17 crc kubenswrapper[4829]: I0217 17:26:17.356009 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-mnrxb_72028d3b-7fd0-4b17-b0c2-c92bc7134637/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.357382 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-274tg_958dea67-d633-4f5c-a18e-2aca1a55020c/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.588534 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fht2z_eaf75815-7964-4bc0-aeae-d3306764d7f4/operator/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.786821 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-546d579865-h84k8_aa745829-0443-47a5-8c10-701bd4645505/manager/0.log" Feb 17 17:26:18 crc kubenswrapper[4829]: I0217 17:26:18.872446 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-thspt_4edb58e7-9b2a-4b5e-aabb-4fe8bd988dd3/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.356705 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zbs8b_23c03a71-fe86-47ad-ae4b-dd49bc07f2b0/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.626649 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-ndxcg_2237138f-4450-415b-9646-c2ab9f88194a/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.656773 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-2xmzw_5239a5a9-e318-4db3-8394-0427d57d4ae5/manager/0.log" Feb 17 17:26:19 crc kubenswrapper[4829]: I0217 17:26:19.757712 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66fcc5ff49-8lb5d_584ed73b-c202-4d41-b884-cd9c279b3c0d/manager/0.log" Feb 17 17:26:22 crc kubenswrapper[4829]: I0217 17:26:22.424059 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:22 crc kubenswrapper[4829]: I0217 17:26:22.425636 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:26 crc kubenswrapper[4829]: I0217 17:26:26.125365 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-dlskg_6084260e-35c2-43b5-9606-98e1e0463e98/manager/0.log" Feb 17 17:26:31 crc kubenswrapper[4829]: E0217 17:26:31.282250 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:31 crc kubenswrapper[4829]: E0217 17:26:31.282369 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:44 crc kubenswrapper[4829]: E0217 17:26:44.282655 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:26:46 crc kubenswrapper[4829]: E0217 17:26:46.282680 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.292311 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sqmls_2bfb2da7-1a85-42f9-8c3f-c7997e85dd58/control-plane-machine-set-operator/0.log" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.408151 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-47kpc_e8a98667-8884-4056-8577-3e7db8762ff9/kube-rbac-proxy/0.log" Feb 17 17:26:47 crc kubenswrapper[4829]: I0217 17:26:47.521382 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-47kpc_e8a98667-8884-4056-8577-3e7db8762ff9/machine-api-operator/0.log" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.424625 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.425241 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.425300 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.426309 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:26:52 crc kubenswrapper[4829]: I0217 17:26:52.426380 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" gracePeriod=600 Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.401941 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" exitCode=0 Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402066 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4"} Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402839 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} Feb 17 17:26:53 crc kubenswrapper[4829]: I0217 17:26:53.402875 4829 scope.go:117] "RemoveContainer" containerID="a29f062a34b0cf5072df71e74727f19a1e589843b5dc22ef5e453ecac2956e80" Feb 17 17:26:58 crc kubenswrapper[4829]: E0217 17:26:58.291758 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:00 crc kubenswrapper[4829]: E0217 17:27:00.280842 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.183840 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mf5jl_476f8c4d-b180-40c8-b5a7-120565b0789f/cert-manager-controller/0.log" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.369704 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-29pr5_90365502-e574-4c31-b97b-ca69aac75648/cert-manager-cainjector/0.log" Feb 17 17:27:03 crc kubenswrapper[4829]: I0217 17:27:03.434817 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rzvp5_dc500c7f-2cf7-447f-ae9e-f22211c1d4ad/cert-manager-webhook/0.log" Feb 17 17:27:12 crc kubenswrapper[4829]: E0217 17:27:12.282058 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:15 crc kubenswrapper[4829]: E0217 17:27:15.282591 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.203564 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-mchvp_df7e3d75-f36c-4258-ae86-6bb72db7c0e4/nmstate-console-plugin/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.374938 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-47lp4_4e62a7c0-ac99-4dd8-a587-58c98adb3a25/nmstate-handler/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.467969 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-85cbd_20b39811-2839-4b55-a69e-a293416edb22/kube-rbac-proxy/0.log" Feb 17 17:27:21 crc kubenswrapper[4829]: I0217 17:27:21.541312 4829 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-85cbd_20b39811-2839-4b55-a69e-a293416edb22/nmstate-metrics/0.log" Feb 17 17:27:22 crc kubenswrapper[4829]: I0217 17:27:22.419176 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-v2bww_55a7b0a0-24f0-4b6b-82bf-f131f831af3a/nmstate-webhook/0.log" Feb 17 17:27:22 crc kubenswrapper[4829]: I0217 17:27:22.444077 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-lpfx5_e597d80c-fb6d-45a3-9b01-4a32a59f07a6/nmstate-operator/0.log" Feb 17 17:27:24 crc kubenswrapper[4829]: E0217 17:27:24.281014 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:29 crc kubenswrapper[4829]: E0217 17:27:29.283037 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:38 crc kubenswrapper[4829]: I0217 17:27:38.145912 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/kube-rbac-proxy/0.log" Feb 17 17:27:38 crc kubenswrapper[4829]: I0217 17:27:38.175173 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/manager/0.log" Feb 17 17:27:39 crc kubenswrapper[4829]: E0217 
17:27:39.282636 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:44 crc kubenswrapper[4829]: E0217 17:27:44.281301 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:27:53 crc kubenswrapper[4829]: E0217 17:27:53.281755 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.122166 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwcb6_edb49e50-f230-48c5-b2e5-fe59a3ae73fa/prometheus-operator/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.275454 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-6q6r7_54e12496-0dd9-43a5-accb-e17546b7b715/prometheus-operator-admission-webhook/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.375288 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-vsf4q_a3ae1cd0-485d-4d83-8601-79d0c99bf9e8/prometheus-operator-admission-webhook/0.log" Feb 17 17:27:55 crc 
kubenswrapper[4829]: I0217 17:27:55.516282 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9xj96_9d3431d3-b6f2-4658-b45c-c428b77e98df/operator/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.577066 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vtctx_54f57142-2ddb-4c2f-a68e-ab77ff965e8c/observability-ui-dashboards/0.log" Feb 17 17:27:55 crc kubenswrapper[4829]: I0217 17:27:55.734196 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f6t4s_dd120281-015e-45a4-b1ae-f868b2326499/perses-operator/0.log" Feb 17 17:27:57 crc kubenswrapper[4829]: E0217 17:27:57.281543 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:28:07 crc kubenswrapper[4829]: E0217 17:28:07.281204 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:28:09 crc kubenswrapper[4829]: E0217 17:28:09.282386 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:28:13 crc kubenswrapper[4829]: I0217 17:28:13.851319 
4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-csdvg_54232488-a26b-4bdf-8b89-381241b92b54/cluster-logging-operator/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.049065 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_c7dd4bfd-add5-4b6b-a938-5e8ae8433d10/loki-compactor/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.057506 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-j7l9k_768f24d9-7e75-4b78-a2a7-10cdfd579577/collector/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.232005 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-knrkx_3e78e45a-c46f-4cfd-a487-56fad3cb0649/loki-distributor/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.261430 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-6lhvz_52de54a3-9f80-412c-a925-25541914e2b0/gateway/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.375013 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-6lhvz_52de54a3-9f80-412c-a925-25541914e2b0/opa/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.453158 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-8xxq9_38a2308f-5d3c-4dac-b105-3d42a6b7bdd1/gateway/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.480768 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6d6859d459-8xxq9_38a2308f-5d3c-4dac-b105-3d42a6b7bdd1/opa/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.625202 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7bf847ac-1d33-4bad-8882-4661d8f33da8/loki-index-gateway/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.773267 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_a7c5b31c-f45c-4a04-afc1-251ef93e471a/loki-ingester/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.838142 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-w7bl4_76340faf-b2e5-461e-9172-a03eee715830/loki-querier/0.log" Feb 17 17:28:14 crc kubenswrapper[4829]: I0217 17:28:14.996876 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-7v4zj_90856a62-8a7f-479c-af7e-a95b8292618a/loki-query-frontend/0.log" Feb 17 17:28:20 crc kubenswrapper[4829]: E0217 17:28:20.284666 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:28:21 crc kubenswrapper[4829]: E0217 17:28:21.281478 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:28:32 crc kubenswrapper[4829]: I0217 17:28:32.600076 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-g4znl_1da62b69-54b6-4041-885f-acda828405c9/kube-rbac-proxy/0.log" Feb 17 17:28:32 crc kubenswrapper[4829]: I0217 17:28:32.791449 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-g4znl_1da62b69-54b6-4041-885f-acda828405c9/controller/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.324212 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.508100 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.525094 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.551742 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.620716 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.819236 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.831713 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.841133 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log" Feb 17 17:28:33 crc kubenswrapper[4829]: I0217 17:28:33.851292 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.083652 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/controller/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.092293 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-frr-files/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.092785 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-metrics/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.100728 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/cp-reloader/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: E0217 17:28:34.280838 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:28:34 crc kubenswrapper[4829]: E0217 17:28:34.282757 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.360917 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/frr-metrics/0.log" Feb 17 17:28:34 crc 
kubenswrapper[4829]: I0217 17:28:34.361810 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/kube-rbac-proxy/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.387686 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/kube-rbac-proxy-frr/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.597498 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/reloader/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.638774 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-l8gzk_8ddfc374-12f8-443a-bcc1-526613e031bf/frr-k8s-webhook-server/0.log" Feb 17 17:28:34 crc kubenswrapper[4829]: I0217 17:28:34.857987 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848c6d5b-p864p_c5cf20c6-9fae-4c85-9c16-53e313c04cda/manager/0.log" Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.073158 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6bd8598c46-74wvs_90b368e2-73a9-4594-8428-e17a7bb1e499/webhook-server/0.log" Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.231895 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gr6k_a25680cc-e984-4ad7-95e2-3fe561a5fa8c/kube-rbac-proxy/0.log" Feb 17 17:28:35 crc kubenswrapper[4829]: I0217 17:28:35.933108 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gr6k_a25680cc-e984-4ad7-95e2-3fe561a5fa8c/speaker/0.log" Feb 17 17:28:36 crc kubenswrapper[4829]: I0217 17:28:36.098706 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7qwft_901c7cfc-f3f1-470c-bd1f-47ab57bb1b53/frr/0.log" 
Feb 17 17:28:45 crc kubenswrapper[4829]: E0217 17:28:45.298709 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:28:48 crc kubenswrapper[4829]: E0217 17:28:48.290113 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:28:49 crc kubenswrapper[4829]: I0217 17:28:49.942699 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.176860 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.190200 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.203213 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.389616 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/extract/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.404207 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/util/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.404299 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19h7hdj_ee1e8312-b6e2-431a-a9b5-e16c1bb04b8b/pull/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.581270 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.761006 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.797866 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log" Feb 17 17:28:50 crc kubenswrapper[4829]: I0217 17:28:50.804991 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.025799 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/extract/0.log" Feb 
17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.054276 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/util/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.067674 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ftn2n_a1ffb98f-3b96-4b10-9f6b-7fa5b840d460/pull/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.231140 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.400909 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.447170 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.455409 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.646489 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/extract/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.662172 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/pull/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.678272 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213tf5px_63ecbb28-5618-4f33-9125-c0372c407b89/util/0.log" Feb 17 17:28:51 crc kubenswrapper[4829]: I0217 17:28:51.866106 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.083519 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.087194 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.090405 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.328526 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-utilities/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.331452 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/extract-content/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.424167 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.424236 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.657077 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.880132 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log" Feb 17 17:28:52 crc kubenswrapper[4829]: I0217 17:28:52.944972 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log" Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.086217 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log" Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.235873 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xgnph_11288751-f708-4745-96fa-625be709d265/registry-server/0.log" Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.271536 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-content/0.log" 
Feb 17 17:28:53 crc kubenswrapper[4829]: I0217 17:28:53.294774 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/extract-utilities/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.168590 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.326985 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.415566 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.449637 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.723632 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/extract/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.774937 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/pull/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.819049 4829 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hbzjz_c5571b57-495c-43ce-88ed-ec6f10e58839/util/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.826301 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vvk9j_65b3d23b-0d04-496a-9dbb-fb4ed59d313b/registry-server/0.log" Feb 17 17:28:54 crc kubenswrapper[4829]: I0217 17:28:54.925240 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log" Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.187698 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log" Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.187847 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log" Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.217525 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log" Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.943832 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/util/0.log" Feb 17 17:28:55 crc kubenswrapper[4829]: I0217 17:28:55.948200 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/pull/0.log" Feb 17 17:28:55 crc 
kubenswrapper[4829]: I0217 17:28:55.994823 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapj2rl_2f38714a-d191-4850-8b52-257b43af4a40/extract/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.066113 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dk6vq_1ab6fa1e-fad5-43cf-b55f-be2dd2d71cf9/marketplace-operator/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.149405 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.335403 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.341122 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.369839 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.629997 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.637372 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-utilities/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.669368 4829 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/extract-content/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.877835 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2sjn_2b134949-3436-4e61-9649-5704b6bcb7fd/registry-server/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.926995 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.943059 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log" Feb 17 17:28:56 crc kubenswrapper[4829]: I0217 17:28:56.989130 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log" Feb 17 17:28:57 crc kubenswrapper[4829]: I0217 17:28:57.179702 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-utilities/0.log" Feb 17 17:28:57 crc kubenswrapper[4829]: I0217 17:28:57.220910 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/extract-content/0.log" Feb 17 17:28:58 crc kubenswrapper[4829]: I0217 17:28:57.999803 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h59n9_b1207e9e-0755-423d-9a3d-b83ded02c8c2/registry-server/0.log" Feb 17 17:29:00 crc kubenswrapper[4829]: E0217 17:29:00.282114 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:01 crc kubenswrapper[4829]: E0217 17:29:01.295446 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:12 crc kubenswrapper[4829]: E0217 17:29:12.283885 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.283254 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.395765 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.396082 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.396208 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:29:15 crc kubenswrapper[4829]: E0217 17:29:15.397399 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.571949 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwcb6_edb49e50-f230-48c5-b2e5-fe59a3ae73fa/prometheus-operator/0.log" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.613686 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-6q6r7_54e12496-0dd9-43a5-accb-e17546b7b715/prometheus-operator-admission-webhook/0.log" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.633259 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6bb447465-vsf4q_a3ae1cd0-485d-4d83-8601-79d0c99bf9e8/prometheus-operator-admission-webhook/0.log" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.775254 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9xj96_9d3431d3-b6f2-4658-b45c-c428b77e98df/operator/0.log" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.878224 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f6t4s_dd120281-015e-45a4-b1ae-f868b2326499/perses-operator/0.log" Feb 17 17:29:15 crc kubenswrapper[4829]: I0217 17:29:15.885324 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-vtctx_54f57142-2ddb-4c2f-a68e-ab77ff965e8c/observability-ui-dashboards/0.log" Feb 17 17:29:22 crc kubenswrapper[4829]: I0217 17:29:22.425499 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 17 17:29:22 crc kubenswrapper[4829]: I0217 17:29:22.426206 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.414822 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.415287 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.415419 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:29:27 crc kubenswrapper[4829]: E0217 17:29:27.416607 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:29 crc kubenswrapper[4829]: E0217 17:29:29.283379 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:31 crc kubenswrapper[4829]: I0217 17:29:31.969675 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/kube-rbac-proxy/0.log" Feb 17 17:29:32 crc kubenswrapper[4829]: I0217 17:29:32.009229 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5c6bf5887b-ljvq2_d845044e-d849-405d-a6ef-c2d76a5abba6/manager/0.log" Feb 17 17:29:39 crc kubenswrapper[4829]: E0217 17:29:39.283926 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:42 crc kubenswrapper[4829]: E0217 17:29:42.282151 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427140 4829 patch_prober.go:28] interesting pod/machine-config-daemon-fzwcw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427881 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.427941 4829 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.428985 4829 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.429053 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerName="machine-config-daemon" containerID="cri-o://2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" gracePeriod=600 Feb 17 17:29:52 crc kubenswrapper[4829]: E0217 17:29:52.591012 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930481 4829 generic.go:334] "Generic (PLEG): container finished" podID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" exitCode=0 Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930524 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerDied","Data":"2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39"} Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.930563 4829 scope.go:117] "RemoveContainer" containerID="2cca88b97a22dbe6fb133610ed93024c7927fa22a8c805a1eca2785987f0a0d4" Feb 17 17:29:52 crc kubenswrapper[4829]: I0217 17:29:52.931537 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:29:52 crc kubenswrapper[4829]: E0217 17:29:52.932189 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:29:54 crc kubenswrapper[4829]: E0217 17:29:54.282977 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:29:57 crc 
kubenswrapper[4829]: E0217 17:29:57.281183 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.190428 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191577 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191641 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191669 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191676 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191704 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191710 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191723 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-utilities" Feb 17 17:30:00 crc 
kubenswrapper[4829]: I0217 17:30:00.191729 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191744 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191750 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4829]: E0217 17:30:00.191770 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.191776 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.192049 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b16d00f-7aac-42b2-ba34-9cf5cffbfddc" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.192110 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb860ed-6cd7-4618-8ea7-158f7e3251d8" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.193124 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.198934 4829 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.199177 4829 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.223602 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251070 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251520 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.251730 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.354843 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.355090 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.355428 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.357172 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.362216 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.380498 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"collect-profiles-29522490-szp66\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:00 crc kubenswrapper[4829]: I0217 17:30:00.532754 4829 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:01 crc kubenswrapper[4829]: I0217 17:30:01.180144 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66"] Feb 17 17:30:01 crc kubenswrapper[4829]: W0217 17:30:01.197629 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7afba793_475b_494e_9c36_7e080ebc391b.slice/crio-aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a WatchSource:0}: Error finding container aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a: Status 404 returned error can't find the container with id aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a Feb 17 17:30:02 crc kubenswrapper[4829]: I0217 17:30:02.098838 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerStarted","Data":"0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3"} Feb 17 17:30:02 crc 
kubenswrapper[4829]: I0217 17:30:02.099202 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerStarted","Data":"aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a"} Feb 17 17:30:02 crc kubenswrapper[4829]: I0217 17:30:02.127288 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" podStartSLOduration=2.127260952 podStartE2EDuration="2.127260952s" podCreationTimestamp="2026-02-17 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:30:02.126185993 +0000 UTC m=+5714.543203971" watchObservedRunningTime="2026-02-17 17:30:02.127260952 +0000 UTC m=+5714.544278930" Feb 17 17:30:03 crc kubenswrapper[4829]: I0217 17:30:03.111397 4829 generic.go:334] "Generic (PLEG): container finished" podID="7afba793-475b-494e-9c36-7e080ebc391b" containerID="0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3" exitCode=0 Feb 17 17:30:03 crc kubenswrapper[4829]: I0217 17:30:03.111787 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerDied","Data":"0563ae2a7392234b64cadc5981d2414e0be225686ece6c592818b1d84f514fe3"} Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.793637 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959104 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959691 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.959823 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") pod \"7afba793-475b-494e-9c36-7e080ebc391b\" (UID: \"7afba793-475b-494e-9c36-7e080ebc391b\") " Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.960497 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume" (OuterVolumeSpecName: "config-volume") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.961106 4829 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7afba793-475b-494e-9c36-7e080ebc391b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.968844 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4829]: I0217 17:30:04.969029 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls" (OuterVolumeSpecName: "kube-api-access-vs4ls") pod "7afba793-475b-494e-9c36-7e080ebc391b" (UID: "7afba793-475b-494e-9c36-7e080ebc391b"). InnerVolumeSpecName "kube-api-access-vs4ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.064081 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4ls\" (UniqueName: \"kubernetes.io/projected/7afba793-475b-494e-9c36-7e080ebc391b-kube-api-access-vs4ls\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.064127 4829 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7afba793-475b-494e-9c36-7e080ebc391b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136338 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" event={"ID":"7afba793-475b-494e-9c36-7e080ebc391b","Type":"ContainerDied","Data":"aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a"} Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136388 4829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf6e3e2a1e6f72f2a82f43f015fe8f23eca05d50ab476176cc09e5ba91fd29a" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.136401 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-szp66" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.279928 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:05 crc kubenswrapper[4829]: E0217 17:30:05.280473 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.952870 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 17:30:05 crc kubenswrapper[4829]: I0217 17:30:05.973318 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-h7tqt"] Feb 17 17:30:06 crc kubenswrapper[4829]: E0217 17:30:06.284080 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:06 crc kubenswrapper[4829]: I0217 17:30:06.292554 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddee5a9-0539-4387-8a52-5a41ca147e35" path="/var/lib/kubelet/pods/8ddee5a9-0539-4387-8a52-5a41ca147e35/volumes" Feb 17 17:30:09 crc kubenswrapper[4829]: E0217 17:30:09.283525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:17 crc kubenswrapper[4829]: E0217 17:30:17.282240 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.279567 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:20 crc kubenswrapper[4829]: E0217 17:30:20.280460 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.345755 4829 scope.go:117] "RemoveContainer" containerID="87e482ef23bb57f1d4a6798f16eaf98b6ce734c85eb70dffa54a6e1571c426fb" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.368183 4829 scope.go:117] "RemoveContainer" containerID="ffff6b2d26175c7db13843c3d1e0facecff3bf68dd516d8014d048e1b97a3919" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.404491 4829 scope.go:117] "RemoveContainer" containerID="bcf1c8409562c09ed78fc314b8b13f9bdad4a95aae316c61aeff47192a538aa0" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.456359 4829 scope.go:117] "RemoveContainer" 
containerID="829f1e6d25fa8b8039552f1de7e37290fef10a0dc44b3d0d53ca9ef97122cd8e" Feb 17 17:30:20 crc kubenswrapper[4829]: I0217 17:30:20.531482 4829 scope.go:117] "RemoveContainer" containerID="1d62bf70711cfb51cfd46ea523c58b214244ee708f6720e407503e7e33a91fa2" Feb 17 17:30:23 crc kubenswrapper[4829]: E0217 17:30:23.282104 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:31 crc kubenswrapper[4829]: E0217 17:30:31.283170 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:35 crc kubenswrapper[4829]: I0217 17:30:35.279972 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:35 crc kubenswrapper[4829]: E0217 17:30:35.280691 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:38 crc kubenswrapper[4829]: E0217 17:30:38.289632 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:43 crc kubenswrapper[4829]: E0217 17:30:43.281092 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:30:47 crc kubenswrapper[4829]: I0217 17:30:47.279190 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:30:47 crc kubenswrapper[4829]: E0217 17:30:47.280171 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:30:49 crc kubenswrapper[4829]: E0217 17:30:49.283263 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:30:58 crc kubenswrapper[4829]: E0217 17:30:58.293561 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:02 crc kubenswrapper[4829]: I0217 17:31:02.279679 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:02 crc kubenswrapper[4829]: E0217 17:31:02.280520 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:04 crc kubenswrapper[4829]: E0217 17:31:04.280951 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:10 crc kubenswrapper[4829]: E0217 17:31:10.287415 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:13 crc kubenswrapper[4829]: I0217 17:31:13.279674 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:13 crc kubenswrapper[4829]: E0217 17:31:13.280476 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:19 crc kubenswrapper[4829]: E0217 17:31:19.284863 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:22 crc kubenswrapper[4829]: E0217 17:31:22.281611 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:27 crc kubenswrapper[4829]: I0217 17:31:27.279919 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:27 crc kubenswrapper[4829]: E0217 17:31:27.281022 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:32 crc kubenswrapper[4829]: E0217 17:31:32.281440 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:35 crc kubenswrapper[4829]: E0217 17:31:35.283529 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.103833 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" exitCode=0 Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.103898 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmblp/must-gather-bqwqp" event={"ID":"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10","Type":"ContainerDied","Data":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} Feb 17 17:31:36 crc kubenswrapper[4829]: I0217 17:31:36.105737 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:37 crc kubenswrapper[4829]: I0217 17:31:37.009644 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/gather/0.log" Feb 17 17:31:39 crc kubenswrapper[4829]: I0217 17:31:39.279813 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:39 crc kubenswrapper[4829]: E0217 17:31:39.280212 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.498487 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.499248 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bmblp/must-gather-bqwqp" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" containerID="cri-o://9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" gracePeriod=2 Feb 17 17:31:45 crc kubenswrapper[4829]: I0217 17:31:45.513123 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmblp/must-gather-bqwqp"] Feb 17 17:31:45 crc kubenswrapper[4829]: E0217 17:31:45.829237 4829 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd6f0fc_6efb_4c69_8adc_11bfd6242c10.slice/crio-9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd6f0fc_6efb_4c69_8adc_11bfd6242c10.slice/crio-conmon-9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.005895 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/copy/0.log" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.007252 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.130156 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") pod \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.130224 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") pod \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\" (UID: \"cbd6f0fc-6efb-4c69-8adc-11bfd6242c10\") " Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.137969 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz" (OuterVolumeSpecName: "kube-api-access-c7bzz") pod "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" (UID: "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10"). InnerVolumeSpecName "kube-api-access-c7bzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228130 4829 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmblp_must-gather-bqwqp_cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/copy/0.log" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228778 4829 generic.go:334] "Generic (PLEG): container finished" podID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" exitCode=143 Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.228874 4829 scope.go:117] "RemoveContainer" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.229006 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmblp/must-gather-bqwqp" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.233382 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7bzz\" (UniqueName: \"kubernetes.io/projected/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-kube-api-access-c7bzz\") on node \"crc\" DevicePath \"\"" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.255312 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.314938 4829 scope.go:117] "RemoveContainer" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: E0217 17:31:46.315007 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:31:46 crc 
kubenswrapper[4829]: E0217 17:31:46.315466 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": container with ID starting with 9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5 not found: ID does not exist" containerID="9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315537 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5"} err="failed to get container status \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": rpc error: code = NotFound desc = could not find container \"9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5\": container with ID starting with 9e0cf988bef5441b8f6e89a6e70375d620633dc6b095859a678d67bbd7a27ab5 not found: ID does not exist" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315567 4829 scope.go:117] "RemoveContainer" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: E0217 17:31:46.315898 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": container with ID starting with 9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133 not found: ID does not exist" containerID="9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.315921 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133"} err="failed to get container status 
\"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": rpc error: code = NotFound desc = could not find container \"9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133\": container with ID starting with 9c394a1c4f2cf7dd7b57f7c8f8fd5c39febbbd5d70d752c79faabfb16b087133 not found: ID does not exist" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.343490 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" (UID: "cbd6f0fc-6efb-4c69-8adc-11bfd6242c10"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:31:46 crc kubenswrapper[4829]: I0217 17:31:46.438065 4829 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:31:47 crc kubenswrapper[4829]: E0217 17:31:47.283038 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:31:48 crc kubenswrapper[4829]: I0217 17:31:48.294812 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" path="/var/lib/kubelet/pods/cbd6f0fc-6efb-4c69-8adc-11bfd6242c10/volumes" Feb 17 17:31:52 crc kubenswrapper[4829]: I0217 17:31:52.280247 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:31:52 crc kubenswrapper[4829]: E0217 17:31:52.282369 4829 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:00 crc kubenswrapper[4829]: E0217 17:32:00.282497 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:00 crc kubenswrapper[4829]: E0217 17:32:00.283511 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:05 crc kubenswrapper[4829]: I0217 17:32:05.280479 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:05 crc kubenswrapper[4829]: E0217 17:32:05.281367 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:11 crc kubenswrapper[4829]: E0217 17:32:11.281730 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:12 crc kubenswrapper[4829]: E0217 17:32:12.282843 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:20 crc kubenswrapper[4829]: I0217 17:32:20.280342 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:20 crc kubenswrapper[4829]: E0217 17:32:20.281247 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:22 crc kubenswrapper[4829]: E0217 17:32:22.281805 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:27 crc kubenswrapper[4829]: E0217 17:32:27.283929 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:33 crc kubenswrapper[4829]: I0217 17:32:33.280154 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:33 crc kubenswrapper[4829]: E0217 17:32:33.280949 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:36 crc kubenswrapper[4829]: E0217 17:32:36.281844 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:41 crc kubenswrapper[4829]: E0217 17:32:41.282724 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:45 crc kubenswrapper[4829]: I0217 17:32:45.280311 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:32:45 crc kubenswrapper[4829]: E0217 17:32:45.282904 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:32:47 crc kubenswrapper[4829]: E0217 17:32:47.281188 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:32:53 crc kubenswrapper[4829]: E0217 17:32:53.281737 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:32:59 crc kubenswrapper[4829]: E0217 17:32:59.281831 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:00 crc kubenswrapper[4829]: I0217 17:33:00.279380 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:00 crc kubenswrapper[4829]: E0217 17:33:00.280121 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:08 crc kubenswrapper[4829]: E0217 17:33:08.296246 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:10 crc kubenswrapper[4829]: E0217 17:33:10.281222 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:15 crc kubenswrapper[4829]: I0217 17:33:15.280231 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:15 crc kubenswrapper[4829]: E0217 17:33:15.281166 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.797993 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801391 4829 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801548 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801682 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801764 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: E0217 17:33:21.801906 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.801987 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802350 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="gather" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802451 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd6f0fc-6efb-4c69-8adc-11bfd6242c10" containerName="copy" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.802531 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afba793-475b-494e-9c36-7e080ebc391b" containerName="collect-profiles" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.804243 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.821273 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.961633 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.962177 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:21 crc kubenswrapper[4829]: I0217 17:33:21.962325 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.064864 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065071 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065118 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065489 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.065529 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.091892 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"certified-operators-fxkqc\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.183120 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:22 crc kubenswrapper[4829]: E0217 17:33:22.288550 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:22 crc kubenswrapper[4829]: I0217 17:33:22.795380 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:22 crc kubenswrapper[4829]: W0217 17:33:22.797273 4829 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80976c2_e7e3_4ad9_8eb9_6e14939fa5d0.slice/crio-c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0 WatchSource:0}: Error finding container c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0: Status 404 returned error can't find the container with id c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0 Feb 17 17:33:23 crc kubenswrapper[4829]: E0217 17:33:23.281258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355058 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" exitCode=0 Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355139 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428"} Feb 17 17:33:23 crc kubenswrapper[4829]: I0217 17:33:23.355173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0"} Feb 17 17:33:24 crc kubenswrapper[4829]: I0217 17:33:24.368787 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.279713 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:27 crc kubenswrapper[4829]: E0217 17:33:27.280213 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.409084 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" exitCode=0 Feb 17 17:33:27 crc kubenswrapper[4829]: I0217 17:33:27.409163 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" 
event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} Feb 17 17:33:28 crc kubenswrapper[4829]: I0217 17:33:28.427096 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerStarted","Data":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} Feb 17 17:33:28 crc kubenswrapper[4829]: I0217 17:33:28.453114 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxkqc" podStartSLOduration=2.960580532 podStartE2EDuration="7.453067135s" podCreationTimestamp="2026-02-17 17:33:21 +0000 UTC" firstStartedPulling="2026-02-17 17:33:23.35854669 +0000 UTC m=+5915.775564668" lastFinishedPulling="2026-02-17 17:33:27.851033293 +0000 UTC m=+5920.268051271" observedRunningTime="2026-02-17 17:33:28.445532432 +0000 UTC m=+5920.862550410" watchObservedRunningTime="2026-02-17 17:33:28.453067135 +0000 UTC m=+5920.870085113" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.183993 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.184509 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:32 crc kubenswrapper[4829]: I0217 17:33:32.235223 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:33 crc kubenswrapper[4829]: E0217 17:33:33.282719 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:35 crc kubenswrapper[4829]: E0217 17:33:35.281155 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:40 crc kubenswrapper[4829]: I0217 17:33:40.280037 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:40 crc kubenswrapper[4829]: E0217 17:33:40.281106 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.244825 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.298534 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:42 crc kubenswrapper[4829]: I0217 17:33:42.598768 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxkqc" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" containerID="cri-o://2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" gracePeriod=2 Feb 17 17:33:43 crc 
kubenswrapper[4829]: I0217 17:33:43.137045 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236358 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236541 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.236815 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") pod \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\" (UID: \"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0\") " Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.238015 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities" (OuterVolumeSpecName: "utilities") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.243978 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5" (OuterVolumeSpecName: "kube-api-access-cz2j5") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "kube-api-access-cz2j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.295785 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" (UID: "f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340214 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340283 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.340300 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz2j5\" (UniqueName: \"kubernetes.io/projected/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0-kube-api-access-cz2j5\") on node \"crc\" DevicePath \"\"" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613272 4829 generic.go:334] "Generic (PLEG): container finished" podID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" 
containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" exitCode=0 Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613324 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613350 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxkqc" event={"ID":"f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0","Type":"ContainerDied","Data":"c1f71fa8ea14d707e91e6edac6bb7042e6b4b9997e3517d0a443271ccf21c3c0"} Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613367 4829 scope.go:117] "RemoveContainer" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.613366 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxkqc" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.648362 4829 scope.go:117] "RemoveContainer" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.664769 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.671726 4829 scope.go:117] "RemoveContainer" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.676051 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxkqc"] Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.745302 4829 scope.go:117] "RemoveContainer" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.745936 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": container with ID starting with 2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe not found: ID does not exist" containerID="2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746113 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe"} err="failed to get container status \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": rpc error: code = NotFound desc = could not find container \"2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe\": container with ID starting with 2985abc865f3ee85b6e180114bc812e18e102d3bf42f6c3ae7d821e6348d3abe not 
found: ID does not exist" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746257 4829 scope.go:117] "RemoveContainer" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.746769 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": container with ID starting with 48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a not found: ID does not exist" containerID="48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746802 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a"} err="failed to get container status \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": rpc error: code = NotFound desc = could not find container \"48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a\": container with ID starting with 48bfe7f53c66d1b781b2a562f1f397f389f590f20fd4e2cfde23f161ad8cb05a not found: ID does not exist" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.746828 4829 scope.go:117] "RemoveContainer" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: E0217 17:33:43.747136 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": container with ID starting with ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428 not found: ID does not exist" containerID="ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428" Feb 17 17:33:43 crc kubenswrapper[4829]: I0217 17:33:43.747165 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428"} err="failed to get container status \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": rpc error: code = NotFound desc = could not find container \"ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428\": container with ID starting with ac13ba4bbecfe6a568adc4dadcddbee4140c9fbcf1673dd767a6c07a12837428 not found: ID does not exist" Feb 17 17:33:44 crc kubenswrapper[4829]: I0217 17:33:44.300051 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" path="/var/lib/kubelet/pods/f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0/volumes" Feb 17 17:33:46 crc kubenswrapper[4829]: E0217 17:33:46.285029 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:33:49 crc kubenswrapper[4829]: E0217 17:33:49.282867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:33:51 crc kubenswrapper[4829]: I0217 17:33:51.280214 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:33:51 crc kubenswrapper[4829]: E0217 17:33:51.281525 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:33:59 crc kubenswrapper[4829]: E0217 17:33:59.284945 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:03 crc kubenswrapper[4829]: I0217 17:34:03.279985 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:03 crc kubenswrapper[4829]: E0217 17:34:03.281890 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:04 crc kubenswrapper[4829]: E0217 17:34:04.282994 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:05 crc kubenswrapper[4829]: I0217 17:34:05.309143 4829 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6d69d97dcf-pdd69" podUID="cd5d005a-eb7a-4cbc-932f-2640cb8068eb" containerName="proxy-server" probeResult="failure" 
output="HTTP probe failed with statuscode: 502" Feb 17 17:34:11 crc kubenswrapper[4829]: E0217 17:34:11.281675 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:17 crc kubenswrapper[4829]: I0217 17:34:17.282024 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:17 crc kubenswrapper[4829]: I0217 17:34:17.283796 4829 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.285628 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.389894 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.389958 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.390106 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqk5
m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qptzd_openstack(a7091b35-889b-422b-aead-117292847a8a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:17 crc kubenswrapper[4829]: E0217 17:34:17.391427 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:26 crc kubenswrapper[4829]: E0217 17:34:26.282103 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:31 crc kubenswrapper[4829]: I0217 17:34:31.282474 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:31 crc kubenswrapper[4829]: E0217 17:34:31.283258 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:32 crc kubenswrapper[4829]: E0217 17:34:32.282491 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.000533 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001864 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-utilities" Feb 17 17:34:40 crc 
kubenswrapper[4829]: I0217 17:34:40.001884 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-utilities" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001906 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.001915 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.001945 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-content" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.001955 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="extract-content" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.002263 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80976c2-e7e3-4ad9-8eb9-6e14939fa5d0" containerName="registry-server" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.004540 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.044264 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065689 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065846 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.065874 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168092 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168235 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.168353 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.169267 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.169308 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.195675 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"community-operators-4mlxs\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.335341 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.417783 4829 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.418135 4829 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.418267 4829 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hcbh5fdh54dh589h598h574h5ffhb6h76h5c8h67dhfdh66fh5c5h67bh5d7h88h697hfchd7hf4h8ch575h56dh568hd8h666h55fh67dh6fhb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e01f505e-09de-4b7d-ae8a-b9f392c3b592): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:40 crc kubenswrapper[4829]: E0217 17:34:40.420207 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:40 crc kubenswrapper[4829]: I0217 17:34:40.969408 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.302993 4829 generic.go:334] "Generic (PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" exitCode=0 Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.303175 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2"} Feb 17 17:34:41 crc kubenswrapper[4829]: I0217 17:34:41.303230 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"270179ade4b11a7d177cfee64fe4570654b2234b20cc90c73fa23cd98e67c217"} Feb 17 17:34:43 crc kubenswrapper[4829]: E0217 17:34:43.282070 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:34:43 crc kubenswrapper[4829]: I0217 17:34:43.351619 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} Feb 17 17:34:45 crc kubenswrapper[4829]: I0217 17:34:45.374079 4829 generic.go:334] "Generic 
(PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" exitCode=0 Feb 17 17:34:45 crc kubenswrapper[4829]: I0217 17:34:45.374173 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.280095 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:34:46 crc kubenswrapper[4829]: E0217 17:34:46.280771 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fzwcw_openshift-machine-config-operator(fbb42864-7e0c-40a9-a14a-5f4155ed0e94)\"" pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" podUID="fbb42864-7e0c-40a9-a14a-5f4155ed0e94" Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.387103 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerStarted","Data":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} Feb 17 17:34:46 crc kubenswrapper[4829]: I0217 17:34:46.412935 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mlxs" podStartSLOduration=2.941111277 podStartE2EDuration="7.412916501s" podCreationTimestamp="2026-02-17 17:34:39 +0000 UTC" firstStartedPulling="2026-02-17 17:34:41.3061272 +0000 UTC m=+5993.723145178" lastFinishedPulling="2026-02-17 17:34:45.777932424 +0000 UTC m=+5998.194950402" observedRunningTime="2026-02-17 17:34:46.40251219 
+0000 UTC m=+5998.819530178" watchObservedRunningTime="2026-02-17 17:34:46.412916501 +0000 UTC m=+5998.829934479" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.335985 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.336512 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:50 crc kubenswrapper[4829]: I0217 17:34:50.392103 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:34:51 crc kubenswrapper[4829]: E0217 17:34:51.281912 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:34:56 crc kubenswrapper[4829]: E0217 17:34:56.283182 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.279683 4829 scope.go:117] "RemoveContainer" containerID="2fdacc5c721bee53b596aef192187886398295d351544bb6363eccc5d482bb39" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.395348 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.460373 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:00 crc kubenswrapper[4829]: I0217 17:35:00.576282 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mlxs" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" containerID="cri-o://3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" gracePeriod=2 Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.117561 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.315779 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.316248 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.316557 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") pod \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\" (UID: \"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a\") " Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.317909 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities" (OuterVolumeSpecName: "utilities") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: 
"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.321786 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96" (OuterVolumeSpecName: "kube-api-access-k2s96") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "kube-api-access-k2s96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.366930 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" (UID: "4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420426 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420476 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.420487 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2s96\" (UniqueName: \"kubernetes.io/projected/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a-kube-api-access-k2s96\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.588334 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fzwcw" event={"ID":"fbb42864-7e0c-40a9-a14a-5f4155ed0e94","Type":"ContainerStarted","Data":"671f1cb3fbc562660eb7c1e1869f59b0a300c8fa64e35695004296799dbe493d"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590534 4829 generic.go:334] "Generic (PLEG): container finished" podID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" exitCode=0 Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590587 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590611 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mlxs" event={"ID":"4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a","Type":"ContainerDied","Data":"270179ade4b11a7d177cfee64fe4570654b2234b20cc90c73fa23cd98e67c217"} Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590630 4829 scope.go:117] "RemoveContainer" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.590691 4829 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mlxs" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.615629 4829 scope.go:117] "RemoveContainer" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.655915 4829 scope.go:117] "RemoveContainer" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.688834 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.709218 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mlxs"] Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.720426 4829 scope.go:117] "RemoveContainer" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.721564 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": container with ID starting with 3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044 not found: ID does not exist" containerID="3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.721627 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044"} err="failed to get container status \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": rpc error: code = NotFound desc = could not find container \"3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044\": container with ID starting with 3a1b9cea4ce22c0885786a1abab7478b0b52f509c9dae869a42363659e95c044 not 
found: ID does not exist" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.721652 4829 scope.go:117] "RemoveContainer" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.724643 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": container with ID starting with 98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77 not found: ID does not exist" containerID="98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.724803 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77"} err="failed to get container status \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": rpc error: code = NotFound desc = could not find container \"98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77\": container with ID starting with 98b2b3eb5b32b89c5d08f1dd6f08ee28139da2583a3d4b5336d80e67fdc52a77 not found: ID does not exist" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.724944 4829 scope.go:117] "RemoveContainer" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: E0217 17:35:01.725772 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": container with ID starting with cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2 not found: ID does not exist" containerID="cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2" Feb 17 17:35:01 crc kubenswrapper[4829]: I0217 17:35:01.725805 4829 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2"} err="failed to get container status \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": rpc error: code = NotFound desc = could not find container \"cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2\": container with ID starting with cf357ef41c963c6fb5f701387ac25c2f91ef5f8456a81f78ddbc54c56e8e01a2 not found: ID does not exist" Feb 17 17:35:02 crc kubenswrapper[4829]: I0217 17:35:02.298663 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" path="/var/lib/kubelet/pods/4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a/volumes" Feb 17 17:35:04 crc kubenswrapper[4829]: E0217 17:35:04.281618 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:35:08 crc kubenswrapper[4829]: E0217 17:35:08.299930 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:15 crc kubenswrapper[4829]: E0217 17:35:15.282937 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 
17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.829509 4829 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830768 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-utilities" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830789 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-utilities" Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830832 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-content" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830841 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="extract-content" Feb 17 17:35:17 crc kubenswrapper[4829]: E0217 17:35:17.830870 4829 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.830880 4829 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.831163 4829 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3c5236-ad88-4cc5-83ab-6fc6c45c4e2a" containerName="registry-server" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.833246 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.851523 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861534 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861708 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.861887 4829 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.964658 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965114 4829 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965177 4829 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965562 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.965619 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:17 crc kubenswrapper[4829]: I0217 17:35:17.983526 4829 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"redhat-operators-dqswj\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:18 crc kubenswrapper[4829]: I0217 17:35:18.159380 4829 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:18 crc kubenswrapper[4829]: I0217 17:35:18.686481 4829 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174716 4829 generic.go:334] "Generic (PLEG): container finished" podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5" exitCode=0 Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174818 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"} Feb 17 17:35:19 crc kubenswrapper[4829]: I0217 17:35:19.174959 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"b7f14e59773190d0d34da9bcb850d95b1c5c18a49c66d9e83683819501e4e491"} Feb 17 17:35:20 crc kubenswrapper[4829]: I0217 17:35:20.187131 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} Feb 17 17:35:20 crc kubenswrapper[4829]: E0217 17:35:20.281780 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:25 crc kubenswrapper[4829]: I0217 17:35:25.243522 4829 generic.go:334] "Generic (PLEG): container finished" 
podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679" exitCode=0 Feb 17 17:35:25 crc kubenswrapper[4829]: I0217 17:35:25.243656 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} Feb 17 17:35:26 crc kubenswrapper[4829]: I0217 17:35:26.256675 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerStarted","Data":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"} Feb 17 17:35:26 crc kubenswrapper[4829]: I0217 17:35:26.283610 4829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqswj" podStartSLOduration=2.729500739 podStartE2EDuration="9.283590546s" podCreationTimestamp="2026-02-17 17:35:17 +0000 UTC" firstStartedPulling="2026-02-17 17:35:19.177431433 +0000 UTC m=+6031.594449411" lastFinishedPulling="2026-02-17 17:35:25.73152124 +0000 UTC m=+6038.148539218" observedRunningTime="2026-02-17 17:35:26.274227713 +0000 UTC m=+6038.691245701" watchObservedRunningTime="2026-02-17 17:35:26.283590546 +0000 UTC m=+6038.700608534" Feb 17 17:35:28 crc kubenswrapper[4829]: I0217 17:35:28.160121 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:28 crc kubenswrapper[4829]: I0217 17:35:28.160464 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:29 crc kubenswrapper[4829]: I0217 17:35:29.223196 4829 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqswj" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" 
containerName="registry-server" probeResult="failure" output=< Feb 17 17:35:29 crc kubenswrapper[4829]: timeout: failed to connect service ":50051" within 1s Feb 17 17:35:29 crc kubenswrapper[4829]: > Feb 17 17:35:30 crc kubenswrapper[4829]: E0217 17:35:30.283087 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592" Feb 17 17:35:35 crc kubenswrapper[4829]: E0217 17:35:35.282456 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.219354 4829 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.274906 4829 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:38 crc kubenswrapper[4829]: I0217 17:35:38.463057 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"] Feb 17 17:35:39 crc kubenswrapper[4829]: I0217 17:35:39.394763 4829 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqswj" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerName="registry-server" containerID="cri-o://1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" gracePeriod=2 Feb 17 17:35:39 crc kubenswrapper[4829]: I0217 17:35:39.986398 4829 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.121612 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.122029 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.122068 4829 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") pod \"a485b000-0c0b-48e7-9286-f8e155eb02cf\" (UID: \"a485b000-0c0b-48e7-9286-f8e155eb02cf\") " Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.126560 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities" (OuterVolumeSpecName: "utilities") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.146840 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld" (OuterVolumeSpecName: "kube-api-access-8lxld") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). 
InnerVolumeSpecName "kube-api-access-8lxld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.229753 4829 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.230006 4829 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lxld\" (UniqueName: \"kubernetes.io/projected/a485b000-0c0b-48e7-9286-f8e155eb02cf-kube-api-access-8lxld\") on node \"crc\" DevicePath \"\""
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.295787 4829 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a485b000-0c0b-48e7-9286-f8e155eb02cf" (UID: "a485b000-0c0b-48e7-9286-f8e155eb02cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.331932 4829 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a485b000-0c0b-48e7-9286-f8e155eb02cf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407202 4829 generic.go:334] "Generic (PLEG): container finished" podID="a485b000-0c0b-48e7-9286-f8e155eb02cf" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409" exitCode=0
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407246 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"}
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407293 4829 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqswj" event={"ID":"a485b000-0c0b-48e7-9286-f8e155eb02cf","Type":"ContainerDied","Data":"b7f14e59773190d0d34da9bcb850d95b1c5c18a49c66d9e83683819501e4e491"}
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407319 4829 scope.go:117] "RemoveContainer" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.407304 4829 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqswj"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.452767 4829 scope.go:117] "RemoveContainer" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.479357 4829 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"]
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.497057 4829 scope.go:117] "RemoveContainer" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.500419 4829 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqswj"]
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.566172 4829 scope.go:117] "RemoveContainer" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"
Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.567064 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": container with ID starting with 1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409 not found: ID does not exist" containerID="1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567132 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409"} err="failed to get container status \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": rpc error: code = NotFound desc = could not find container \"1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409\": container with ID starting with 1c8d838c21b3e7148948237c9a721e652dfd5154b3f3a39554bc8aebba729409 not found: ID does not exist"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567204 4829 scope.go:117] "RemoveContainer" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"
Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.567726 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": container with ID starting with 8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679 not found: ID does not exist" containerID="8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567800 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679"} err="failed to get container status \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": rpc error: code = NotFound desc = could not find container \"8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679\": container with ID starting with 8d541a0e245cf9bdaa6371964d88d6faf00c1c388018a1bcad5b453d1a31d679 not found: ID does not exist"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.567863 4829 scope.go:117] "RemoveContainer" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"
Feb 17 17:35:40 crc kubenswrapper[4829]: E0217 17:35:40.568146 4829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": container with ID starting with fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5 not found: ID does not exist" containerID="fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"
Feb 17 17:35:40 crc kubenswrapper[4829]: I0217 17:35:40.568195 4829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5"} err="failed to get container status \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": rpc error: code = NotFound desc = could not find container \"fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5\": container with ID starting with fa9c1ff5800e1d799a55d5f54203fd1f88908568315b159845d6b821191358d5 not found: ID does not exist"
Feb 17 17:35:42 crc kubenswrapper[4829]: I0217 17:35:42.292607 4829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a485b000-0c0b-48e7-9286-f8e155eb02cf" path="/var/lib/kubelet/pods/a485b000-0c0b-48e7-9286-f8e155eb02cf/volumes"
Feb 17 17:35:44 crc kubenswrapper[4829]: E0217 17:35:44.282910 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"
Feb 17 17:35:47 crc kubenswrapper[4829]: E0217 17:35:47.282353 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:35:58 crc kubenswrapper[4829]: E0217 17:35:58.290232 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qptzd" podUID="a7091b35-889b-422b-aead-117292847a8a"
Feb 17 17:35:59 crc kubenswrapper[4829]: E0217 17:35:59.281867 4829 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e01f505e-09de-4b7d-ae8a-b9f392c3b592"